Groq Desktop Beta: A Game-Changer for MCP Support

Groq has just released Groq Desktop in beta (https://github.com/groq/groq-desktop-beta), and I’ve had the chance to try it out. What caught my attention was its impressive MCP support, which seems to outshine Claude Desktop’s. Here are three reasons why:

  1. YOLO Mode: Groq Desktop can auto-approve tool executions instead of prompting for confirmation every time, which makes the whole flow smoother.
  2. On-the-fly Server Reload: unlike Claude Desktop, where you have to restart the app to reload MCP servers, Groq Desktop reloads them seamlessly.
  3. Hot Enable/Disable: you can enable or disable individual MCP servers on the fly, without restarting Groq Desktop.

These features make Groq Desktop a strong contender in the MCP support arena. Have you tried it out? What are your thoughts?
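
For context, an MCP server is declared as a simple command + args entry. Here is a minimal example in the JSON shape Claude Desktop popularized (from what I've seen, Groq Desktop's settings accept a very similar structure; the filesystem server and path are just an illustration):

    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/docs"]
        }
      }
    }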

Remarkable Pro / Hackers gonna hack

I have a reMarkable Pro. I want to achieve three things:
1) SSH access to the device
2) Syncing my files using my own cloud. Solution: rmfakecloud (instead of reMarkable’s proprietary paid cloud)
3) Using the rM pen as a regular tablet pen. Solution: remarkable-mouse

  1. Enable developer mode: https://support.remarkable.com/s/article/Developer-mode
  2. Install rmfakecloud: https://github.com/ddvk/rmfakecloud
  3. Install remarkable-mouse: https://github.com/Evidlo/remarkable_mouse
     • Launch it like this: python -m remarkable_mouse --key ~/.ssh/id_rsa_remarkable
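
For step 1, you'll want key-based SSH so remarkable-mouse can connect without a password. A quick sketch of the usual routine (10.11.99.1 is the tablet's standard address when plugged in over USB; the root password is shown on the developer-mode settings screen):

    $ ssh-keygen -t rsa -f ~/.ssh/id_rsa_remarkable
    $ ssh-copy-id -i ~/.ssh/id_rsa_remarkable root@10.11.99.1
    $ ssh -i ~/.ssh/id_rsa_remarkable root@10.11.99.1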

How to understand new frameworks (like OpenAI Agents SDK) using an LLM

Recently, OpenAI published its framework https://openai.github.io/openai-agents-python (a lightweight, powerful framework for multi-agent workflows). I wanted to try it out, but I didn’t have much time… so I resorted to a new technique I’ve been using lately to do quick tests on new frameworks I want to explore.

  1. Access the online documentation: https://openai.github.io/openai-agents-python/
  2. Open Firecrawl.dev. This application lets us crawl a website, extracting the main text of each page into Markdown or JSON. The idea is to collect all the HTML documentation of the framework as plain text.
     • I told it to exclude pages whose path matches ref/.+ so as not to overload the LLM with extra context (I’ll explain this in a second).
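
If you’d rather script the crawl than click through the web UI, Firecrawl also has a Python SDK. A rough sketch, assuming the firecrawl-py v1 client (the excludePaths option is what implements the ref/.+ filter; parameter names may differ between SDK versions, and the API key is a placeholder):

    from firecrawl import FirecrawlApp

    app = FirecrawlApp(api_key="fc-YOUR_API_KEY")  # placeholder key

    # Crawl the docs, skipping API-reference pages and requesting markdown
    result = app.crawl_url(
        "https://openai.github.io/openai-agents-python/",
        params={
            "excludePaths": ["ref/.+"],
            "scrapeOptions": {"formats": ["markdown"]},
        },
    )
    print(result)  # crawl status plus the scraped pages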

  3. Download the results (an archive with one .md file per crawled page).
  4. Unpack the archive and check the extracted .md files.
  5. Attach the .md files to Claude as context, either by drag & drop or by concatenating everything into a single file (cat *.md > documentation.md) and uploading that.
  6. The prompt: "Read the following info about how to create an agent with OpenAI Agents SDK. I want to create an agent that knows how to fetch info from a webpage. We can use a python function that internally uses requests."
  7. Claude got it right on the first try and generated an agent, built on the new OpenAI Agents SDK, that we can use to ask questions about any website.
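
For reference, the agent it produced looked roughly like this (a sketch, not Claude’s verbatim output; it assumes pip install openai-agents requests and an OPENAI_API_KEY in the environment):

    import requests
    from agents import Agent, Runner, function_tool

    @function_tool
    def fetch_webpage(url: str) -> str:
        """Download a webpage and return its raw text content."""
        response = requests.get(url, timeout=15)
        response.raise_for_status()
        return response.text[:8000]  # truncate to keep the context manageable

    # The model decides when to call the tool based on the question
    web_agent = Agent(
        name="Web QA agent",
        instructions="Answer questions about webpages. "
                     "Use fetch_webpage to retrieve their content.",
        tools=[fetch_webpage],
    )

    result = Runner.run_sync(web_agent, "What is https://example.com about?")
    print(result.final_output)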

TIL: how to install IPTV Simple Client in Kodi + webOS

I had been thinking for a while that I would take advantage of Christmas to install Kodi on my LG TV (webOS). So yesterday I got to work on it.

The installation is straightforward if you follow this mini-tutorial: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_webOS. The problem arose when trying to install the IPTV Simple Client add-on. It has several dependencies, including inputstream.ffmpegdirect and inputstream.rtmp, which are not available in the Kodi repository for LG, so you need to download and install pre-compiled binaries.

Just download the binaries from this repo:

https://github.com/satgit62/pvr.hts-tvhead-client-on-LG-webOS?tab=readme-ov-file

(or build them if you feel adventurous)

Then, use webOS Dev Manager to upload the zips to ~/apps/usr/palm/applications/org.xbmc.kodi/addons. Unzip them:

    $ unzip inputstream.ffmpegdirect.zip
    $ unzip inputstream.rtmp.zip

Reboot Kodi. Accept the messages that Kodi will display indicating that it has discovered two new add-ons. Open the settings of IPTV Simple Client.

Note that you can’t “run” this add-on like the others; since it starts automatically with Kodi, there is no ‘Run’ option. Once it is enabled and configured with a working channel list (*.m3u), it scans that list and displays all available channels under the ‘PVR & Live TV’ menu option.
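
The channel list itself is a plain M3U playlist. A minimal, made-up example of the format the add-on expects:

    #EXTM3U
    #EXTINF:-1 tvg-id="channel1" tvg-logo="http://example.com/logo.png" group-title="News",Example Channel
    http://example.com/streams/channel1.m3u8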

Bonus TIL: you can use CanI.RootMy.TV to check whether there is a known rooting exploit for your TV.

Monitoring LLM logs with litellm and langfuse

Context: you have built, or are using, a web application that internally makes API calls to an LLM (GPT, Claude, Llama, whatever). You want to analyze the prompts that application is sending. You’ll need litellm (acting as a proxy between your application and the LLM) and langfuse, which will receive a callback and display all the prompts graphically. The idea is that litellm automatically sends langfuse a copy of every LLM call (and its response) so you can browse them all comfortably afterwards.

Quick recipe:

Install the required dependencies:

    $ pip install litellm 'litellm[proxy]' prisma langfuse

Use the litellm configuration file we already discussed a while back on ikasten.io.
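
If you don’t have that post handy, a minimal config.yaml for the proxy looks something like this (the model name and key are placeholders; check the litellm docs for the full syntax):

    model_list:
      - model_name: gpt-4o
        litellm_params:
          model: openai/gpt-4o
          api_key: os.environ/OPENAI_API_KEY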

Install PostgreSQL, for example through Docker. To do so, use the following docker-compose.yaml:

    version: '3'
    services:
      db:
        image: postgres
        restart: always
        environment:
          POSTGRES_DB: litellm
          POSTGRES_USER: llmproxy
          POSTGRES_PASSWORD: dbpassword9090
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -d litellm -U llmproxy"]
          interval: 1s
          timeout: 5s
          retries: 10
        ports:
          - "5432:5432"

(change the dbpassword9090 password to whatever you like). PostgreSQL is needed so that litellm can store model information. You will also need to create a PostgreSQL schema through Prisma.

Copy the schema.prisma file from the litellm GitHub repository:

https://github.com/BerriAI/litellm/blob/main/schema.prisma

    $ prisma generate

Launch litellm:

    $ DATABASE_URL="postgresql://llmproxy:dbpassword9090@localhost:5432/litellm" STORE_MODEL_IN_DB="True" LITELLM_MASTER_KEY=sk-12345 LITELLM_SALT_KEY="saltkey1234" litellm --config ./config.yaml

Open litellm’s graphical admin panel at localhost:4000/ui

By default, login: admin, password: the master_key you defined. In this example: sk-12345.

Click on Logging & Alerts -> Add Callback

Choose langfuse and then type in the requested parameters (the langfuse public and secret keys).

You can get the public_key and secret_key by creating a free account at langfuse: https://cloud.langfuse.com/

Create a project and grab the API keys.

Click on Test Callback and within a few seconds you should see a new test entry in the langfuse logs.
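
Alternatively, you can skip the UI and wire the callback directly in litellm’s config.yaml, exporting the langfuse keys as environment variables. The equivalent settings look like this:

    litellm_settings:
      success_callback: ["langfuse"]

    # and in the environment:
    # LANGFUSE_PUBLIC_KEY=pk-lf-...
    # LANGFUSE_SECRET_KEY=sk-lf-...
    # LANGFUSE_HOST=https://cloud.langfuse.com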

More info:

https://robert-mcdermott.medium.com/centralizing-multi-vendor-llm-services-with-litellm-9874563f3062