Blog

  • services

    services

    helpwave’s microservices

    Getting started

    Note

    Make sure you have Git, Docker and Docker Compose
    installed on your system.

    Note

    There seem to be service-discovery issues on Windows Hosts.

    Using Docker Compose next to your IDE

    Using this setup, you use your editor of choice on your system and start the docker compose stack next to it.
    The main downside is that your IDE will use your local toolchain and dependencies, which may not be in sync with those inside the containers.

    Tip

    ./up.sh -d starts the compose stack in the background

    1. Clone the repo locally
      $ git clone git@github.com:helpwave/services.git
    2. Start the docker compose stack using the $ ./up.sh script
    3. Start your IDE of choice and start hacking!
    4. You can use the tools installed in the dev docker container by opening a shell on it:

      $ ./dev.sh
      # you are now in the container
      $ psql -d user-svc

    Next Steps: Read on here

    Using Development Containers

    You can start your IDE and Docker Compose together using Dev Containers.
    This installs an IDE backend inside the devcontainer container, which your local IDE client then uses instead of your local setup.

    Important

    Development Containers is still relatively new technology.
    Things are likely to change and/or break.

    Warning

    Even though containers are a somewhat isolated environment,
    the devcontainer has access to the host’s docker socket.
    This means malicious code can still mess with your system.
    Just like on your host system (assuming your user is in the docker group), code could effectively run as root if it tried hard enough,
    for example by spawning a new container that mounts / and runs rm -rf /.
    You can disable docker-out-of-docker by setting DOCKER_SOCK in dev’s .env to /dev/null.
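
    For example (the exact file location is an assumption based on the description above):

    # dev/.env
    DOCKER_SOCK=/dev/null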

    Development Containers and IntelliJ (GoLand)

    Relevant Docs

    1. Start GoLand and close any open projects.
    2. Alternatively, start JetBrains Gateway.
    3. Make sure your docker setup is correctly configured
    4. Under “Remote Development” -> “Dev Containers” create a new Dev Container
    5. Copy in git@github.com:helpwave/services.git and select your branch of choice.
    6. Alternatively, click on “From Local Project” and navigate to a local copy of the repo’s .devcontainer/devcontainer.json
    7. Wait, follow further instructions, and select the editor you want to launch (likely GoLand)
    8. GoLand will start in a docker container, and you can start hacking!
    9. Use the usual GoLand UI to open a terminal, where you will find a bunch of tools pre-installed for you.
    10. Next time, you can find the Dev Container ready to start.

    Next Steps: Read on here

    Development Containers and VSCode

    1. Clone the Repo and open it in VSCode
    2. On the lower left, click on the remote development icon
    3. Select “Reopen in Container”
    4. VSCode will restart in a docker container, and you can start hacking!
    5. Use the VSCode UI to open a terminal, where you will find a bunch of tools pre-installed for you.
    6. Closing VSCode will not stop the docker compose stack. To do that manually, close VSCode, navigate to dev, and run docker compose down there.

    Development

    Following the steps above, you should now have a docker compose stack running and be able to execute commands in the dev-devcontainer-1 container.

    To see what compose actually starts, we recommend a look into the docker-compose.yaml, but here is the gist:

    • Dependencies (e.g., Redis, Postgres and EventStore)
    • apisix
    • services
    • devcontainer

    Of particular interest are the services and devcontainer containers. The former starts all Go services using air, a tool that automatically recompiles and restarts a service when its sources change.
    The latter gives you a playground for development with all the tools you need. As it is in the docker network, you can use the names of the other (docker compose) services as hostnames (e.g., postgres resolves to the container in which the postgres instance is running).
    It also has access to docker. This means you can, for example, read the services logs with docker compose logs services -f, and start / stop / restart containers.

    A lot of ports will be allocated on your host system for additional debugging. Have a look at the table below for more information.

    Networking

    The microservices are tied together using Dapr. Most services provide a gRPC API, as defined by the Protobuf definitions in /proto.
    Here you can find the proto’s documentation.

    We provide one unified entrypoint for clients in the form of the API gateway APISIX. It exposes a gRPC-Web API for HTTP-only clients (i.e., web), and falls back to a plain gRPC proxy for clients that support it (i.e., mobile).

    How do I talk to a service?

    Using helpwave/web

    The frontend is a great way to test regressions. Hop over to the web repo to find out how to set it up and point it to APISIX (HTTP).

    Using APISIX over grpc-web

    Using this approach, you are closest to the actual calls made by the frontends.

    TODO: We are still looking for a tool that works well for this :/

    Directly

    https://github.com/fullstorydev/grpcurl

    grpcurl --plaintext \
    	-H "Authorization: Bearer <...>" \
    	-d '{}' \
    	localhost:3004 \
    	services.tasks_svc.v1.WardService/GetWards

    Ports

    Service          Port    Protocol
    user-svc         3002    gRPC
    tasks-svc        3004    gRPC
    property-svc     3005    gRPC
    updates-svc      3006    gRPC
    APISIX (dapr)    3500    http
    APISIX (dapr)    35001   grpc
    APISIX           9080    http (mostly grpc-web)
    APISIX           9433    https (mostly grpc-web)
    APISIX           9090    http (control api)
    APISIX           9091    http (prometheus)
    postgres         5432
    redis            6379
    eventstore       2113
    jaeger (UI)      16686   http
    jaeger (OTLP)    4317    grpc
    SpiceDB          50051   grpc

    Scripts

    These scripts can be run on your local machine:

    • shell.sh – Opens a Shell in the devcontainer
    • up.sh – Starts the docker compose stack, accepts docker compose up arguments. Your IDE might do this for you!
    • down.sh – Shuts down the docker compose stack, accepts docker compose down arguments

    These scripts can be run in the devcontainer:

    • protos.sh: Lint and generate protos
    • migrate.sh: Run database migrations inside docker (see the migrate.sh section below)
    • models.sh: Generate models from SQL queries (see the models.sh section below)
    • lint.sh: Lint go code
    • tidy.sh: Runs go mod tidy across the workspace. Recommended to run after modifying any dependencies.

    Example:

    $ ./shell.sh protos.sh
    
    # or
    $ ./shell.sh
    $ ./protos.sh

    From the shell, you can use go commands, like go test ./... in a service’s directory.
    Go tests are used both for unit tests and more complex (and time-consuming) integration tests simulating a user story.
    These spin up fresh containers for external dependencies, like EventStore, using testcontainers.
    These tests make use of the same docker socket that is mounted into the development environment.
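
    As a rough illustration of that pattern, here is a minimal sketch of a Go test using testcontainers-go (the image name, environment variables, and ports are assumptions for the example, not the exact configuration used by the services):

    package tasks_test

    import (
    	"context"
    	"testing"

    	"github.com/testcontainers/testcontainers-go"
    	"github.com/testcontainers/testcontainers-go/wait"
    )

    func TestWardUserStory(t *testing.T) {
    	ctx := context.Background()

    	// Spin up a throwaway EventStoreDB instance for this test run.
    	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
    		ContainerRequest: testcontainers.ContainerRequest{
    			Image:        "eventstore/eventstore:latest", // illustrative image tag
    			Env:          map[string]string{"EVENTSTORE_INSECURE": "true"},
    			ExposedPorts: []string{"2113/tcp"},
    			WaitingFor:   wait.ForListeningPort("2113/tcp"),
    		},
    		Started: true,
    	})
    	if err != nil {
    		t.Fatal(err)
    	}
    	t.Cleanup(func() { _ = container.Terminate(ctx) })

    	host, err := container.Host(ctx)
    	if err != nil {
    		t.Fatal(err)
    	}
    	port, err := container.MappedPort(ctx, "2113")
    	if err != nil {
    		t.Fatal(err)
    	}

    	// Point the code under test at the temporary instance, then run the user story.
    	connectionString := "esdb://admin:changeit@" + host + ":" + port.Port() + "?tls=false"
    	_ = connectionString
    }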

    Fake token

    When working with services that use auth, instead of dealing with JWTs, you can make use of fake tokens.
    A fake token is only the claims part of a JWT, which are defined here.
    You can create your own by simply base64-encoding your JSON claims.

    Use the environment variable INSECURE_FAKE_TOKEN_ENABLE=true for a service to accept fake-tokens.
    Only do this in development environments!

    Fake tokens are used just like JWTs: As bearer tokens in the Authorization header of your gRPC requests.

    Here is an example fake-token that can be used as the bearer authorization token.

    eyJzdWIiOiIxODE1OTcxMy01ZDRlLTRhZDUtOTRhZC1mYmI2YmIxNDc5ODQiLCJlbWFpbCI6Im1heC5tdXN0ZXJtYW5uQGhlbHB3YXZlLmRlIiwiZW1haWxfdmVyaWZpZWQiOnRydWUsIm5hbWUiOiJNYXggTXVzdGVybWFubiIsInByZWZlcnJlZF91c2VybmFtZSI6Im1heC5tdXN0ZXJtYW5uIiwiZ2l2ZW5fbmFtZSI6Ik1heCIsImZhbWlseV9uYW1lIjoiTXVzdGVybWFubiIsIm9yZ2FuaXphdGlvbiI6eyJpZCI6IjNiMjVjNmY1LTQ3MDUtNDA3NC05ZmM2LWE1MGMyOGViYTQwNiIsIm5hbWUiOiJoZWxwd2F2ZSB0ZXN0In19
    ---
    # decoded
    {
      "sub": "18159713-5d4e-4ad5-94ad-fbb6bb147984",
      "email": "max.mustermann@helpwave.de",
      "email_verified": true,
      "name": "Max Mustermann",
      "preferred_username": "max.mustermann",
      "given_name": "Max",
      "family_name": "Mustermann",
      "organization": {
        "id": "3b25c6f5-4705-4074-9fc6-a50c28eba406",
        "name": "helpwave test"
      }
    }
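
    As an illustration, here is a minimal Go sketch that builds such a token from the claims above and attaches it to an outgoing gRPC context. The exact base64 variant (padding, URL-safe alphabet) and the metadata key are assumptions; check them against the service’s token parsing:

    package main

    import (
    	"context"
    	"encoding/base64"
    	"encoding/json"
    	"fmt"

    	"google.golang.org/grpc/metadata"
    )

    func main() {
    	// Claims copied from the decoded example above; adjust as needed.
    	claims := map[string]any{
    		"sub":                "18159713-5d4e-4ad5-94ad-fbb6bb147984",
    		"email":              "max.mustermann@helpwave.de",
    		"email_verified":     true,
    		"name":               "Max Mustermann",
    		"preferred_username": "max.mustermann",
    		"given_name":         "Max",
    		"family_name":        "Mustermann",
    		"organization": map[string]string{
    			"id":   "3b25c6f5-4705-4074-9fc6-a50c28eba406",
    			"name": "helpwave test",
    		},
    	}

    	raw, err := json.Marshal(claims)
    	if err != nil {
    		panic(err)
    	}

    	// Assumption: plain standard base64; the service may also accept the
    	// padded or URL-safe variants.
    	token := base64.StdEncoding.EncodeToString(raw)
    	fmt.Println(token)

    	// Use it like any other bearer token, e.g. on an outgoing gRPC context
    	// (or in the -H "Authorization: Bearer <token>" flag of grpcurl above).
    	md := metadata.Pairs("authorization", "Bearer "+token)
    	_ = metadata.NewOutgoingContext(context.Background(), md)
    }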
    

    Docker images

    You can build a docker image for a service using make:

    make <servicename>

    e.g.:

     make user-svc

    Each service should have a Makefile that includes the root’s Makefile.
    In the context of a service directory you should be able to use

     make this

    to build the corresponding docker image as well.

    Docker Images

    dev-go

    The dev-go image is a common development environment for go services. It includes go and a whole lot of tools and dependencies.
    When updating the Dockerfile, also increase the DEV_TAG variable in the up.sh file.

    dev-go-custom

    For this image we only provide a Dockerfile; you cannot pull it from the registry.
    It is built by docker-compose using your uid and gid as build arguments to reduce friction when it comes to mounted files.

    The other function of this image is to allow customization of the dev container.
    Feel free to install custom tooling (e.g. zsh, nano, …) there.
    Please mind the instructions at the top of the Dockerfile though.
    If you think a tool might be useful to others, install it in the dev-go image instead.

    Dockerfile.apisix

    This image includes our APISIX configuration and the daprd process. As in Dockerfile.standalone,
    these two processes are running side by side in the container via hivemind.

    Dockerfile.service

    This image contains “just” the service. The Dapr sidecar needs to be deployed separately.

    Dockerfile.standalone

    This image contains the service including the daprd sidecar. These two processes are running
    side by side in the container via hivemind.

    Migrations

    We use migrate to handle changes to our database tables.
    It uses change versioning, e.g.

    • 000009_some_name.up.sql contains all changes from version 8 to 9
    • 000009_some_name.down.sql contains all changes from version 9 to 8, i.e., it reverses the corresponding up migration

    migrate -path services/<service>/migrations/ -database postgres://postgres:postgres@localhost:5432/<service>?sslmode=disable up [version]
    migrate -path services/<service>/migrations/ -database postgres://postgres:postgres@localhost:5432/<service>?sslmode=disable down [version]

    migrate.sh – Running migrate/migrate inside docker

    Recommended

    If you don’t want to install migrate/migrate directly on your machine, you can use our migrate.sh script.
    This script automatically discovers the migration folder and env file and passes them alongside
    your arguments into migrate/migrate running in a container.

    migrate.sh will use the database setup of the passed service. Therefore, a .env file is required.

    Usage

    ./migrate.sh <service> [... arguments for migrate/migrate]

    • <service> must be one of the services inside /services/
    • [... arguments for migrate/migrate] gets passed directly to migrate/migrate

    Examples

    • Migrate the tasks-svc database all the way up: ./migrate.sh tasks-svc up
    • Migrate the tasks-svc database one down: ./migrate.sh tasks-svc down 1
    • Current migration version of the tasks-svc database: ./migrate.sh tasks-svc version

    desired

    Additionally, the script allows you to query the highest available migration version
    based on the migration files available in the service:

    ./migrate.sh <service> desired

    ./models.sh – Generate models from SQL Queries

    Important

    Requires a running database; it’s best to run devenv up before running this script.

    We use sqlc, a compiler that generates Go functions based on SQL queries and a schema.
    This script

    1. Makes sure the database is fully migrated up (see migrate.sh above)
    2. Generates a schema.sql using pg_dump
    3. Invokes sqlc to generate Go code based on a sqlc.yaml

    Tip

    The schema.sql is also a great place to familiarize yourself with the data model(s)!

    Usage:

    • ./models.sh <some-svc> generates models for a specific service
    • ./models.sh models generates models for all services

    Environment variables

    Every service should contain a .env.example file, which should be copied to a usable .env file.

    Default Variables:

    # The mode used, options available
    #  development, production
    MODE=development
    
    # The following levels are available:
    #  trace, debug, info, warn, error, fatal, panic
    LOG_LEVEL=debug
    
    # The port on which the service will listen (default: 8080)
    PORT=8080
    
    # The address on which a metrics (prometheus) server will be started;
    # if missing, no metrics server will be started!
    #   e.g: "0.0.0.0:2112", ":8080"
    METRICS_ADDR=
    
    # Database related configurations
    POSTGRES_HOST=localhost
    POSTGRES_PORT=5432
    POSTGRES_USER=postgres
    POSTGRES_PASSWORD=postgres
    POSTGRES_DB=postgres
    
    # EventStoreDB related configurations
    EVENTSTORE_CS=esdb://admin:changeit@localhost:2113?tls=false
    
    # For development purposes
    INSECURE_FAKE_TOKEN_ENABLE=true
    OAUTH_CLIENT_ID=425f8b8d-c786-4ff7-b2bf-e52f505fb588
    
    # Variables with the prefix "SECRETSTORE_" will be picked up by the current env secret store
    SECRETSTORE_REDIS_HOST=localhost
    SECRETSTORE_REDIS_USERNAME=
    SECRETSTORE_REDIS_PASSWORD=
    
    # Telemetry Related Configurations
    
    # skips opentelemetry setup
    OTEL_DISABLE=false
    
    # one of "zipkin", otlp", "otlp_http"
    # for additional configuration for each exporter see their documentation:
    # - otlp: https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc@v1.21.0
    # - otlp_http: https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp@v1.21.0
    # - zipkin: only provides OTEL_EXPORTER_ZIPKIN_ENDPOINT with default "http://localhost:9411/api/v2/spans"
    OTEL_TRACE_EXPORTER=otlp
    
    # the language to fall back to for users that
    # don't provide a (valid) language header
    # must be a valid BCP 47 string
    FALLBACK_LANGUAGE=en

    Temporary advice

    Example of service-to-service communication via Dapr and gRPC

    // ...
    
    daprClient := common.MustNewDaprGRPCClient()
    
    organizationSvc := user_svc.NewOrganizationServiceClient(daprClient.GrpcClientConn())
    ctx, cancel := common.PrepCtxForSvcToSvcCall(ctx, "user-svc")
    defer cancel()
    
    res, err := organizationSvc.CreateOrganizationForUser(ctx, &user_svc.CreateOrganizationForUserRequest{
    	LongName:     fmt.Sprintf("%s personal organization", payload.Nickname),
    	ShortName:    payload.Nickname,
    	ContactEmail: payload.Email,
    	IsPersonal:   true,
    	UserId:       userID.String(),
    })
    
    if err != nil {
    	return nil, status.Error(codes.Internal, err.Error())
    }

    Arm versioning

    We disarmed the versioning for pre-production.
    To arm the versioning, search for “arm-versioning” in the codebase.

    Archived Services

    Visit original content creator repository
    https://github.com/helpwave/services

  • extrinsic-promises

    Static Badge JavaScript Standard Style Guide

    extrinsic-promises

    Supports Node versions from v6 up to v22, with 0 runtime dependencies

    Deprecated: The functionality provided by this module is now available through the built-in Promise object using the withResolvers function (MDN documentation link). As such, this module is being deprecated and will no longer be maintained. See below for migration patterns.

    extrinsic-promises is a JavaScript module that provides a convenient promises anti-pattern for those times when you just really need to settle (fulfill or reject) your promise from outside the promise’s work-function.

    Specifically, an ExtrinsicPromise is a thennable that you construct without a work-function, and instead call public fulfill and reject methods on the object to settle the state of the promise.

    Note: this is generally a promises antipattern. It is not recommended for most use cases, but there are some situations that can’t reasonably be handled with traditional promises (at least not without re-implementing extrinsic-promises.)

    Migrating from this package to built-in functionality

    As of 2023, the JavaScript standard defines a static function called withResolvers on the built-in Promise object. This function provides the same functionality as extrinsic-promises, and should be preferred going forward. The function has widespread browser support and is available by default in NodeJS as of v22. It is also available behind a feature flag as early as Node v21.7.1.

    While the interface is not a drop-in replacement, all the functionality is easily supported.

    Migration: Creating and settling an externally-settlable promise

    Old way:

    import ExtrinsicPromise from "extrinsic-promises";
    
    const promise = new ExtrinsicPromise();
    promise.fulfill("some value");
    promise.reject(new Error("some reason"));

    New way:

    const { promise, resolve, reject } = Promise.withResolvers();
    resolve("some value");
    reject(new Error("some reason"));

    Migration: adopting a thennable

    Old way:

    import ExtrinsicPromise from "extrinsic-promises";
    
    const promise = new ExtrinsicPromise();
    
    // ...thennable is a Promise-like object with a then/2 method...
    promise.adopt(thennable);

    New way:

    const { promise, resolve, reject } = Promise.withResolvers();
    
    // ...thennable is a Promise-like object with a then/2 method...
    thennable.then(resolve, reject);

    Migration: replacing the work method

    Old way:

    import ExtrinsicPromise from "extrinsic-promises";
    
    const promise = new ExtrinsicPromise();
    
    const myWorkFunction = (fulfill, reject) => {
        // ... do some work and then call either `fulfill` or `reject`.
    };
    promise.work(myWorkFunction);

    New way:

    const { promise, resolve, reject } = Promise.withResolvers();
    
    const myWorkFunction = (fulfill, reject) => {
        // ... do some work and then calls either `fulfill` or `reject`.
    };
    setImmediate(() => {
        try {
            myWorkFunction(resolve, reject);
        } catch (error) {
            reject(error);
        }
    });

    Or, new way:

    const myWorkFunction = (fulfill, reject) => {
        // ... do some work and then calls either `fulfill` or `reject`.
    };
    
    const promise = new Promise(myWorkFunction);

    Migration: Hiding the Extrinsic methods

    Old way:

    import ExtrinsicPromise from "extrinsic-promises";
    
    const exPromise = new ExtrinsicPromise();
    const p = exPromise.hide();
    p.fulfill; // undefined
    p.reject; // undefined
    p.adopt; // undefined
    p.work; // undefined
    p.hide; // undefined

    When creating a promise using withResolvers, there’s no need to hide anything, since the "promise" property is already just a plain Promise.

    New way:

    const { promise: p } = Promise.withResolvers();
    p.fulfill; // undefined
    p.reject; // undefined
    p.adopt; // undefined
    p.work; // undefined
    p.hide; // undefined

    Installation

    npm install --save extrinsic-promises

    Example

    Basic usage:

    import ExtrinsicPromise from "extrinsic-promises";
    
    const promise = new ExtrinsicPromise();
    
    // Setup handlers for the promise, just like you normally would.
    promise.then(value => {
        console.log("Promise was fulfilled with value:", value);
    });
    
    // Call the public methods on the promise to fulfill/resolve it.
    promise.fulfill("Some value");

    Rejecting a promise:

    const promise = new ExtrinsicPromise();
    
    // Register your on-reject handler for the promise,
    // just like you normally would.
    promise.then(null, reason => {
        console.log("Promise was reject for reason:", reason);
    });
    
    // Call the public methods on the promise to reject it.
    promise.reject(new Error("some reason"));

    Getting an Extended API

    The ExtrinsicPromise only provides the basic .then(onFulfill, onReject) method for promises. If you want the convenience methods provided by your favorite promises library, you can usually use that library to wrap an ExtrinsicPromise appropriately:

    import Promise from "bluebird";
    import ExtrinsicPromise from "extrinsic-promises";
    
    const exPromise = new ExtrinsicPromise();
    const bluebirdPromise = Promise.resolve(exPromise);

    Or, if the library doesn’t provide a method like that, you can use the standard Promise constructor as follows:

    import ExtrinsicPromise from "extrinsic-promises";
    
    const exPromise = new ExtrinsicPromise();
    const otherPromise = new Promise((fulfill, reject) => {
        exPromise.then(fulfill, reject);
    });

    API

    The ExtrinsicPromise class exports the following public methods:

    ExtrinsicPromise::then(onFulfill[, onReject])

    The standard then method of the Promises/A+ standard, used to register an on-fulfill and/or on-reject handler for the promise.

    ExtrinsicPromise::fulfill([withValue])

    Resolve (fulfill) the ExtrinsicPromise with the optional given value. Note that there is no guarantee as to when fulfillment occurs (i.e., synchronously or asynchronously).

    This method is already bound and can be used correctly as a function reference. E.g.,:

    const exPromise = new ExtrinsicPromise();
    const fulfillLater = exPromise.fulfill;
    // ...
    fulfillLater(value); // correctly fulfills exPromise.

    ExtrinsicPromise::reject([forReason])

    Reject the ExtrinsicPromise with the optional given reason (typically, an Error object). Note that there is no guarantee as to when rejection occurs (i.e., synchronously or asynchronously).

    This method is already bound and can be used correctly as a function reference. E.g.,:

    const exPromise = new ExtrinsicPromise();
    const rejectLater = exPromise.reject;
    // ...
    rejectLater(reason); // correctly rejects exPromise.

    ExtrinsicPromise::adopt(thennable)

    Adopt the state of the given thennable, once the thennable settles, if this extrinsic promise has not already settled. This is a convenience for using this extrinsic promise’s fulfill and reject methods as the on-fulfill and on-reject handlers, respectively, of the given thennable, as follows:

    const exPromise = new ExtrinsicPromise();
    thennable.then(exPromise.fulfill, exPromise.reject);

    ExtrinsicPromise::work(workfunction)

    An alternative interface for settling the promise, this allows you to pass in a work-function just like you normally would pass to the Promise constructor, but in this case you’re passing it in after the promise has already been constructed.

    The given work function will be invoked unconditionally (even if the promise is already settled) with two arguments, typically called fulfill and reject. These are functions that are used to settle the state of the promise once the work you promise to do is done, just like the .fulfill() and .reject() methods on the ExtrinsicPromise.

    If an error is thrown inside the workfunction, it will be treated as a rejection.

    Note that the work function will be called asynchronously, i.e., the call to .work() will return before the given work function has been called.

    ExtrinsicPromise::hide()

    Returns a minimal thennable object which only exposes the .then() method of this object as a bound function. This allows you to pass around this object as a promise, without exposing its state-mutating methods like .fulfill() and .reject().

    How Does it Work?

    It’s pretty simple; feel free to read the code. There are a few details necessary to avoid race conditions, but the gist of it is to simply save the fulfill and reject signalling functions that the promise passes into the work function:

    constructor () {
      new Promise((fulfill, reject) => {
        this.fulfill = fulfill
        this.reject = reject
      })
    }
    Visit original content creator repository https://github.com/mearns/extrinsic-promises
  • dashboard

    AmazonJeff

    Menu

    About the project

    Features

    Technologies

    Preview

    Layout

    How to contribute

    Contact

    Follow me


    📖 About the project

    This project is being developed during the 4th edition of the Semana Spring React, an event organized by DevSuperior.

    🚧 In development 🚧

    It is a dashboard of a store’s sales.

    📝 Features


    🚀 Technologies

    This project was developed with the following technologies:

    Frontend Web

    Backend


    👀 Preview


    🎨 Layout


    🌎 Production deployment

    Back-end:

    Front-end WEB: Netlify

    🔥 How to use

    🏳️ Prerequisites

    You need to have installed:

    • NodeJS
    • JAVA 11
    • Package manager (NPM or Yarn)

    💾 Clone this repository

    $ git clone https://github.com/ojeffpinheiro01/dashboard.git

    📁 Enter the folder

    $ cd dashboard

    🔧 Install the dependencies

    cd backend && yarn install
    cd frontend && yarn install

    🌐 Run the server

    • Install and configure the PostgreSQL database

    • Import it as a Maven project in your Java IDE

    • Create the tables with the code in the data.sql file in resources

    • Fill in the information in the application.properties files

    • Run it as a Spring Boot application

    💻 Front-end WEB

    Enter the folder

    $ cd frontend

    Run the application

    $ yarn start

    ⚙️ How to contribute

    - Fork this repository;
    - Create a branch with your feature: git checkout -b minha-feature;
    - Commit your changes: git commit -m 'feat: Minha nova feature';
    - Push to your branch: git push origin minha-feature;

    📫 Contact

    Linkedin Badge Gmail Badge

    Follow me

    ojeffoinheiro maykbrito maykbrito maykbrito maykbrito ojeffoinheiro


    This project was developed with ❤️ by Jéferson Pinheiro, together with instructor Nelio Alves, during Semana Spring React 04.
    If it helped you, leave a ⭐; it will help me too 😉

    Visit original content creator repository https://github.com/ojeffpinheiro01/dashboard
  • mojang-blocklist

    sudofox/mojang-blocklist

    In September of 2022 I decided to try to identify the strings for all of the entries in Mojang’s server blocklist. Through many different methods and approaches, including:

    • bruteforcing
    • pulling domains from server lists
    • contextual analysis
    • interviews with former server owners
    • historical research
    • relying on the work of people who’ve come before me
    • assistance from various cool people

    I was able to identify many new strings in the list.

    There’s some GitHub automation in place to automatically update everything every couple of hours.

    How to help

    Obviously, we want to continue to identify more strings. That’s really it. If you are able to identify more, please open a PR or an issue!

    Take a look at data/todo.txt for hashes that have yet to be cracked.

    If you run a Minecraft server list site, you are exactly the kind of person we’re looking for that has the resources to help!

    How to use this stuff

    • data/current.txt contains the current blocklist, as fetched from https://sessionserver.mojang.com/blockedservers
    • data/identified.txt contains all strings which I’ve identified since starting the project, in the format hash=string. It also includes ones that have been since removed from the blocklist.
    • data/merged.txt contains the current blocklist but with identified strings merged in. This is the most useful file to use for contextual analysis.

    Adding new stuff

    See scratchwork.md for various neat and useful snippets.

    Throw whatever you want at node try_url.js. If you find something new, run this stuff:

    npm run update-blocklist ; npm run update-merged; npm run update-todo
    

    For some reason, update-todo sometimes fails on certain systems, no clue why, but you can just manually run the comm command in package.json instead.

    Don’t prune strings from identified.txt just because they have been removed from the blocklist; I’m keeping them in there for historical purposes. I might end up adding a separate file for removed strings at some point which would include verified former blocklist entries.

    Take a look at the expandmc bash function in scratchwork.md

    Background information on the blocklist

    Beginning at Minecraft 1.9.3 r2, Mojang started blocking certain Minecraft servers using a blocklist.

    From wiki.vg’s documentation:

    Clients check the lowercase name, using the ISO-8859-1 charset, against this list. They will also attempt to check subdomains, replacing each level with a *. Specifically, it splits based off of the . in the domain, goes through each section removing one at a time. For instance, for mc.example.com, it would try mc.example.com, *.example.com, and *.com. With IP addresses (verified by having 4 split sections, with each section being a valid integer between 0 and 255, inclusive) substitution starts from the end, so for 192.168.0.1, it would try 192.168.0.1, 192.168.0.*, 192.168.*, and 192.*.

    This check is done by the bootstrap class in netty. The default netty class is overridden by one in the com.mojang:netty dependency loaded by the launcher. This allows it to affect any version that used netty (1.7+)
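
    To make the lookup concrete: the list entries are SHA-1 hex digests of the lowercased patterns, so identifying a string means finding an input whose hash matches an entry. Here is a rough sketch of the client-side check described above (illustrative Go, not the actual netty code):

    package main

    import (
    	"crypto/sha1"
    	"encoding/hex"
    	"fmt"
    	"strconv"
    	"strings"
    )

    // candidates expands a server address into the patterns the client checks:
    // the exact (lowercased) address first, then the wildcard substitutions.
    func candidates(addr string) []string {
    	addr = strings.ToLower(addr)
    	parts := strings.Split(addr, ".")
    	out := []string{addr}
    	if isIPv4(parts) {
    		// IPs substitute from the end: 192.168.0.1 -> 192.168.0.*, 192.168.*, 192.*
    		for i := len(parts) - 1; i > 0; i-- {
    			out = append(out, strings.Join(parts[:i], ".")+".*")
    		}
    	} else {
    		// Hostnames substitute from the front: mc.example.com -> *.example.com, *.com
    		for i := 1; i < len(parts); i++ {
    			out = append(out, "*."+strings.Join(parts[i:], "."))
    		}
    	}
    	return out
    }

    func isIPv4(parts []string) bool {
    	if len(parts) != 4 {
    		return false
    	}
    	for _, p := range parts {
    		if n, err := strconv.Atoi(p); err != nil || n < 0 || n > 255 {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	// In practice this set would be filled from data/current.txt.
    	blocked := map[string]bool{}
    	for _, c := range candidates("mc.example.com") {
    		sum := sha1.Sum([]byte(c))
    		if blocked[hex.EncodeToString(sum[:])] {
    			fmt.Println("blocked via pattern:", c)
    		}
    	}
    }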

    The blocklist appears to have three classes of entries:

    Hostnames

    This includes wildcards, typos, ‘?’ appended, mixed case, and other anomalies. Minecraft servers don’t need to know their own hostname to function, so scanning the Internet for servers isn’t very useful (except maybe with rDNS). Checking server lists is a decent way to find them though.

    There used to be a way to bypass blocks via rotating SRV records, but this was patched in snapshot 21w13a (Mojira issue MC-136551, “Servers able to bypass EULA blacklist.”)

    IPs

    This includes RFC1918 IPs. May also include naive classful wildcards (192.168, etc.).
    This set of hashcat masks for all valid IP addresses can be run after every new hash is added:

    https://github.com/johnjohnsp1/hexhosts/blob/master/ipv4.hcmask

    There aren’t many of these; Mojang does not block many IPs in the list. From some of our research, we found that they used to, but after one notable incident where they blocked ProxyPipe, they generally shifted towards blocking hostnames.

    Test entries

    These are not valid DNS FQDNs, or even hostnames (some have spaces, underscores, etc). These often have “dns” and/or “test” in them, with various combinations of separators (including space), case, and appended digits.

    Thanks

    Special thanks to:

    • @roycewilliams – who has provided a lot of help with identifying various strings through research.
    • All the people who have put in work to identify hashes in the past (e.g @Reecepbcups, @theminecoder)
    • Various people who have let me look at their data (even if it didn’t identify (m)any new hashes) like @Yive and some server list owners

    Visit original content creator repository
    https://github.com/sudofox/mojang-blocklist

  • reading-list

    Visit original content creator repository
    https://github.com/suraj-davariya/reading-list

  • ontologist

    ontologist

    Release Build status License

    A Python library for validating RDF data alignment with ontologies without requiring shape resources.

    Why?

    When working with Large Language Models (LLMs) to extract RDF data based on ontologies, it’s crucial to verify that the extracted data aligns correctly with the target ontology. While tools like pySHACL or PyShEx exist for RDF validation, they may require additional shape resources, or may fail on certain validation checks.

    This library provides a programmatic approach to verify ontology alignment, making it particularly suitable for:

    • Validating LLM-extracted RDF data
    • Working with ontologies that lack shape definitions
    • Getting detailed violation reports for debugging and improvement

    Installation

    pip install ontologist

    Quick Start

    from rdflib import Graph
    from ontologist import validate
    
    # Load your ontology and data graphs
    data = Graph().parse("your_data.ttl")
    ontology = Graph().parse("your_ontology.ttl")
    
    # Validate the data
    is_valid, violations, report = validate(data, ontology)
    
    print(report)
    Validation Report
    Conforms: False
    Results (1):
    PROPERTY type violation:
    	Property 'ex:Prop1' of instance 'ex:Class1' can't have value of type 'http://www.w3.org/2001/XMLSchema#string' because it requires type 'http://www.w3.org/2001/XMLSchema#integer'.
    
    
    Visit original content creator repository https://github.com/atomobianco/ontologist
  • pleque

    PLEQUE – PLasma EQUilibrium Enjoyment module [pleɪɡ]

    GitHub py3comp

    PLEQUE is a Python module allowing simple visualisation and manipulation of tokamak plasma equilibria. For more information see the documentation at https://pleque.readthedocs.io.

    Note: The work is still in the early development stage, so pleque probably contains bugs. You are very welcome to submit your wishes, encountered bugs or any other comments as an issue. Minor changes in the code structure may occur before the 0.1.0 release.

    Getting Started

    Prerequisites

    The following packages are required to install pleque:

    python>=3.5
    numpy
    scipy
    shapely
    scikit-image
    xarray
    pandas
    h5py
    omas
    

    They should be automatically handled by pip further in the installation process.

    Download the source code

    First, pick where you wish to install the code:

     cd /desired/path/

    There are two options for getting the code: install it from PyPI or clone the repository.

    pip install --user pleque

    Alternatively, you may use the unstable experimental release (probably with more bugs fixed):

     pip install --user -i https://test.pypi.org/simple/ pleque

    Clone the github repository

    git clone https://github.com/kripnerl/pleque.git
    cd pleque
    pip install --user .

    Congratulations, you have just installed pleque!

    Examples

    The following example shows how to load an equilibrium saved in the eqdsk format. The equilibrium used here comes from a FIESTA simulation of the COMPASS-Upgrade tokamak.

    from pleque.io import readers
    import pkg_resources
    import matplotlib.pyplot as plt
    
    #Locate a test equilibrium
    filepath = pkg_resources.resource_filename('pleque', 'resources/baseline_eqdsk')

    The heart of pleque is its Equilibrium class, which contains all the equilibrium information (and much more). Typically its instances are called eq.

    # Create an instance of the `Equilibrium` class
    eq = readers.read_geqdsk(filepath)

    The Equilibrium class comes with tons of interesting functions and caveats.

    # Plot a simple overview of the equilibrium
    eq.plot_overview()
    
    # Calculate the separatrix area
    sep_area = eq.lcfs.area
    
    # Get absolute magnetic field magnitude at given point
    R = 0.7 #m
    Z = 0.1 #m
    B = eq.B_abs(R, Z)

    Equilibria may be visualised in many different ways; they may be used for mapping or field line tracing; the possibilities are virtually endless. If there’s a caveat you find missing from pleque, write to us! Further examples can be found as notebooks in the notebooks folder or in the examples directory.

    Version

    0.0.5

    Authors

    See also the list of contributors who participated in this project.

    License

    This project is licensed under the MIT License – see the LICENSE file for details.

    Related projects

    • FreeGS – free boundary Grad-Shafranov solver in Python.
    • OMFIT is an integrated modeling and experimental data analysis software for magnetically confined thermonuclear fusion experiments. The goal of OMFIT is to enhance existing scientific workflows and enable new integrated modeling capabilities. To achieve these goals OMFIT adopts a bottom-up collaborative development approach.
    • OMAS (Ordered Multidimensional Array Structure) is a Python library designed to simplify the interface of third-party codes with the ITER Integrated Modeling and Analysis Suite (IMAS) . ITER IMAS defines a data model, a data get/put API, and a data storage infrastructure used for manipulating ITER data.

    References

    Visit original content creator repository https://github.com/kripnerl/pleque
  • orb

    Orb

    Orb UI Preview

    Go Report Card CodeCov Go Reference

    Orb is a modern network observability platform built to provide critical visibility into increasingly complex and distributed networks. It can analyze network traffic, run synthetic network probes, and connect the resulting telemetry directly to your existing observability stacks with OpenTelemetry. Orb differentiates from other solutions by pushing analysis close to the traffic sources (reducing inactionable metrics and processing costs), and allows for dynamic reconfiguration of remote agents in real time.

    Ready to dive in? See orb.community for installation instructions.

    Why Orb?

    Distributed Deep Network Observability

    Orb manages a fleet of agents deployed across distributed, hybrid infrastructure: containers, VMs, servers, routers and switches. The agent taps into traffic streams and extracts real time insights, resulting in light-weight, actionable metrics.

    Streaming Analysis at the Edge

    Based on the pktvisor observability agent, Orb’s goal is to push analysis to the edge, where high resolution data can be analysed in real time without the need to send raw data to a central location for batch processing. Current analysis focuses on L2-L3 Network, DNS, and DHCP with more analyzers in the works.

    Realtime Agent Orchestration

    Orb uses IoT principles to allow the observability agents to connect out to the Orb central control plane, avoiding firewall problems. Once connected, agents are controlled in real time from the Orb Portal or REST API, orchestrating observability policies designed to precisely extract the desired insights. Agents are grouped and addressed based on tags.

    Flexible Integration With Modern Observability Stacks

    Orb was built to integrate with modern observability stacks, supporting OpenTelemetry natively. Collection and sinking of the metrics from the agents is included; there is no need to run additional data collection pipelines for Orb metrics.

    Portal and REST API Included

    Orb includes a modern, responsive UI for managing Agents, Agent Groups, Policies and Sinks. Orb is API first, and all platform functionality is available for automation via the well documented REST API.

    Open Source, Vendor Neutral, Cloud Native

    Orb is free, open source software (FOSS) released under MPL. It’s a modern microservices application that can be deployed to any Kubernetes service in private or public cloud. It does not depend on any one vendor to function, thus avoiding vendor lock-in.


    Backed by NetBox Labs

    Orb was born at NetBox Labs, where we’re committed to making open source, dynamic edge observability a reality.


    Visit original content creator repository https://github.com/orb-community/orb
  • contests-v1

    Contests web application

    This is a web application to manage contests, user login/registration, and participation in contests by uploading audio files.

    Version 1

    • Built on Ruby and Ruby on Rails.
    • Bootstrap as a front-end framework.
    • Uses local storage to store images and audios.
    • Uses a relational database.

    Assumptions

    This application was created using:

    • Ruby 2.5.3
    • Rails 5.2.2
    • Ubuntu 18.04
    • MySQL 14.14

    Prerequisites

    1. Install Ruby on Rails, follow this link.
    2. Install Git, follow this link.
    3. Install MySQL, follow this link.

    Instructions

    1. Configure these environment variables:

    Environment variable             Example                   Description
    CONTESTS_SERVER_URL              http://localhost:8000/    The server URL where the web application is deployed
    CONTESTS_DATABASE_HOST           localhost or 127.0.0.1    The database host
    CONTESTS_DATABASE_USER           root                      The database user
    CONTESTS_DATABASE_PASSWORD       12345                     The database password
    CONTESTS_DATABASE_NAME           contests                  The database name
    CONTESTS_PATH_IMAGES             /home/images/             The path for images
    CONTESTS_PATH_ORIGINAL_FILES     /home/audio/original/     The path where ORIGINAL audio files will be stored
    CONTESTS_PATH_CONVERTED_FILES    /home/audio/converted/    The path where CONVERTED audio files will be stored

    2. Go to bin/ folder.

    3. Install the MySQL gem and dependencies:

    sudo apt-get install libmysqlclient-dev
    gem install mysql2
    4. Install all the necessary gems and update the existing ones:

    gem update
    gem install rails
    bundle install
    5. Create database model:
    rails db:migrate
    6. Run the server:
    rails server -p 8000
    7. Go to your favorite web browser and type:
    http://localhost:8000/
    8. Enjoy.

    Visit original content creator repository
    https://github.com/juanalvarez123/contests-v1