diff --git a/docs/feed.rss b/docs/feed.rss index 468ffe2ea..87af6784c 100644 --- a/docs/feed.rss +++ b/docs/feed.rss @@ -9,6 +9,86 @@ https://identosphere.com Personal blogs of individuals working in decentralized identity. en-us + + Generative AI – The Power and the Glory + https://simonwillison.net/2025/Jan/12/generative-ai-the-power-and-the-glory/#atom-everything + Generative AI – The Power and the Glory +Michael Liebreich's epic report for BloombergNEF on the current state of play with regards to generative AI, energy usage and data center growth. + I learned so much from reading this. If you're at all interested in the energy impact of the latest wave of AI tools I recommend spending some time with this article. + Just a few of the points that stood out to + Sun, 12 Jan 2025 01:51:46 +0000 + 2025-01-12T01:51:46+00:00 + + + ECDSAに対応したゼロ知識証明の論文がGoogleから出ています + https://idmlab.eidentity.jp/2025/01/ecdsagoogle.html + こんにちは、富士榮です。 AAMVAのモバイル運転免許証のガイドラインでも触れましたが、mdocやSD-JWTのリンク可能性へ対応するためには今後ゼロ知識証明が大切になります。 年末にGoogleの研究者が Anonymous credentials from ECDSA というタイトルでペーパーを出しています。 https://eprint.iacr.org/2024/2010 AIでイラスト生成すると色々とおかしなことになって面白いですねw アブストラクトの中からポイントを抜粋すると、従来のBBS+では暗号スイートへの対応に関する要件が厳しかったのでレガシーで対応できるようにECDSAでもできるようにしたよ、ということのようですね。 Part of the difficulty arises because schemes in the literature, + Sun, 12 Jan 2025 00:23:00 +0000 + 2025-01-12T00:23:00+00:00 + + + What Are Stories? + https://doc.searls.com/2025/01/11/what-are-stories/ + Eighteenth in the New Commons series. Several generations ago, my pal Jerry and I were cutting a hole between the ceiling studs of a rented house in Durham, North Carolina. 
This was our first step toward installing a drop-down stairway to an attic space that had been closed since the house, a defunct parsonage for […] + Sat, 11 Jan 2025 23:54:29 +0000 + 2025-01-11T23:54:29+00:00 + + + Agents + https://simonwillison.net/2025/Jan/11/agents/#atom-everything + Agents +Chip Huyen's 8,000 word practical guide to building useful LLM-driven workflows that take advantage of tools. + Chip starts by providing a definition of "agents" to be used in the piece - in this case it's LLM systems that plan an approach and then run tools in a loop until a goal is achieved. I like how she ties it back to the classic Norvig "thermostat" model - where an agent is "anythi + Sat, 11 Jan 2025 17:50:12 +0000 + 2025-01-11T17:50:12+00:00 + + + Aviation vs. Fire + https://doc.searls.com/2025/01/11/aviation-vs-fire/ + 3:22pm—Hats off to Miles Archer for the links below, one of which goes here— —showing all the aircraft and their paths at once. You can start here at https://globe.adsbexchange.com/, which is kind of your slate that’s blank except for live aircraft over the Palisades Fire: Meanwhile all the media are reporting one home loss, in […] + Sat, 11 Jan 2025 16:52:34 +0000 + 2025-01-11T16:52:34+00:00 + + + Building an open web that protects us from harm + https://benwerd.medium.com/building-an-open-web-that-protects-us-from-harm-40a95da1d82f?source=rss-3b16402f5b9d------2 + It’s not enough to be neutral. We must be allies. Continue reading on Medium » + Sat, 11 Jan 2025 15:58:50 +0000 + 2025-01-11T15:58:50+00:00 + + + Building an open web that protects us from harm + https://werd.io/2025/building-an-open-web-that-protects-us-from-harm + + We live in a world where right-wing nationalism is on the rise and many governments, including the incoming Trump administration, are promising mass deportations. Trump in particular has discussed building camps as part of mass deportations. This question used to feel more hypothetical than it does today. 
Faced with this reality, it’s worth asking: who would stand by you if this kind of authori + Sat, 11 Jan 2025 15:58:14 +0000 + 2025-01-11T15:58:14+00:00 + + + The Good, The Bad, And The Stupid In Meta’s New Content Moderation Policies + https://werd.io/2025/the-good-the-bad-and-the-stupid-in-metas-new + + + [Mike Masnick in TechDirt] Mark Zuckerberg is very obviously running scared from the incoming Trump administration: "Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last + Sat, 11 Jan 2025 14:52:19 +0000 + 2025-01-11T14:52:19+00:00 + + + Mullenweg Shuts Down WordPress Sustainability Team, Igniting Backlash + https://werd.io/2025/mullenweg-shuts-down-wordpress-sustainability-team-igniting-backlash + + + [Rae Morey at The Repository] The bananas activity continues over at Automattic / Matt Mullenweg's house: "Members of the fledgling WordPress Sustainability Team have been left reeling after WordPress co-founder Matt Mullenweg abruptly dissolved the team this week. [...] The disbandment happened after team rep Thijs Buijs announced in Making WordPress Slack on Wednesday tha + Sat, 11 Jan 2025 14:33:59 +0000 + 2025-01-11T14:33:59+00:00 + + + AI RAG with LlamaIndex, Local Embedding, and Ollama Llama 3.1 8b + https://m-ruminer.medium.com/ai-rag-with-llamaindex-local-embedding-and-ollama-llama-3-1-8b-b0620116a715?source=rss-7e85224c0a32------2 + In this post, I cover using LlamaIndex LlamaParse in auto mode to parse a PDF page containing a table, using a Hugging Face local embedding model, and using local Llama 3.1 8b via Ollama to perform naive Retrieval Augmented Generation (RAG). That’s a mouthful. I won’t go into how to setup Ollama and Llama 3.1 8b; this post assumes it is running. 
First off, you can find the code for this in m + Sat, 11 Jan 2025 14:31:06 +0000 + 2025-01-11T14:31:06+00:00 + Palisades Fire on the Ridge https://doc.searls.com/2025/01/10/palisades-fire-on-the-ridge/ @@ -30,7 +110,7 @@ This explains why I was seeing weird <|im_end|> suffexes during my The Los Angeles Media Dashboard https://doc.searls.com/2025/01/10/the-los-angeles-media-dashboard/ - Seventeenth in the News Commons series. While I’ve been writing about the #LAfires, this has been my main media dashboard: Those are tabs for five TV stations, one radio station, and one newspaper: KNBC/4 “4 Los Angeles” KTLA/5 “LA’s Very Own” KABC/7  “7 Eyewitness News” KCAL/9 “KCAL NEWS CBS Los Angeles” KTTV/11 “Fox 11 Los […] + Seventeenth in the News Commons series. That collection of tabs is my dashboard of major media that inform my writing about the #LAfires. There are tabs for five TV stations, one radio station, and one newspaper: KNBC/4 “4 Los Angeles” KTLA/5 “LA’s Very Own” KABC/7  “7 Eyewitness News” KCAL/9 “KCAL NEWS CBS Los Angeles” KTTV/11 […] Fri, 10 Jan 2025 19:08:55 +0000 2025-01-10T19:08:55+00:00 @@ -160,7 +240,7 @@ Here's the official release of Microsoft's Phi-4 LLM, now officially under an MI OYO AI http://www.moxytongue.com/2025/01/oyo-ai.html -  With over 1000 AI communities deployed in 2024, our Next AI cohort begins.. Coding, Computer Science, Artificial Intelligence, and Entrepreneurial Skill Development, With mentor support of all learners, students and teachers. Unlimited learning by design  By kidOYO at OYOclass.com  +  With over 1000 AI communities deployed in 2024, the next AI cohort begins.. Coding, Computer Science, Artificial Intelligence, Entrepreneurial Skill Development, Teacher PD,  With mentor support of all learners, students and teachers. 
Unlimited learning by design: (own root)  OYO®  AI  by kidOYO®  at OYOclass.com, Educati Wed, 08 Jan 2025 12:59:00 +0000 2025-01-08T12:59:00+00:00 @@ -636,16 +716,6 @@ Discovered on 12th October 2024 by the Great Internet Mersenne Prime Search. The Thu, 02 Jan 2025 07:39:50 +0000 2025-01-02T07:39:50+00:00 - - Ending a year long posting streak - https://simonwillison.net/2025/Jan/2/ending-a-year-long-posting-streak/#atom-everything - A year ago today I wrote about Tom Scott's legendary 10 year YouTube streak, in which he posted a new video once a week for the next ten years. Inspired by that, I also started my own. - I set myself the goal of posting something to my blog every day for a year. - Given how much happened in my chosen field of Large Language Models over the course of 2024 this wasn't as hard as I had expected! - On - Thu, 02 Jan 2025 00:25:34 +0000 - 2025-01-02T00:25:34+00:00 - Should URI::mysql Switch to DBD::MariaDB? https://www.perlmonks.org/?node_id=11163487 @@ -792,17 +862,6 @@ selected values with normalized paths to their locations. Tue, 31 Dec 2024 22:53:19 +0000 2024-12-31T22:53:19+00:00 - - Timeline of AI model releases in 2024 - https://simonwillison.net/2024/Dec/31/2024-ai-releases/#atom-everything - Timeline of AI model releases in 2024 -VB assembled this detailed timeline of every significant AI model release in 2024, for both API and open weight models. - - I'd hoped to include something like this in my 2024 review - I'm glad I didn't bother, because VB's is way better than anything I had planned. 
- VB built it with assistance from DeepSeek v3, incorporating data from this Artificial Intel - Tue, 31 Dec 2024 20:58:01 +0000 - 2024-12-31T20:58:01+00:00 - SQL/JSON Path Playground Update https://theory.github.io/sqljson/ @@ -908,13 +967,6 @@ VB assembled this detailed timeline of every significant AI model release in 202 Mon, 30 Dec 2024 10:32:16 +0000 2024-12-30T10:32:16+00:00 - - The Kraken Won - https://doc.searls.com/2024/12/29/the-kraken-won/ - Imagine what would have happened had Martin Winterkorn not imploded, and if Volkswagen, under his watch, had not become a datakraken (data sea-monster, or octopus), spying on drivers and passengers—just like every other car company. What would the world now be like if Volkswagen since 2014 had established itself as the only car maker not […] - Mon, 30 Dec 2024 03:42:10 +0000 - 2024-12-30T03:42:10+00:00 - AAMVAのMobile Drivers License Implementation Guidelinesを読む⑧ https://idmlab.eidentity.jp/2024/12/aamvamobile-drivers-license_01040135091.html @@ -1066,13 +1118,6 @@ in-browser playground. Sat, 21 Dec 2024 09:58:00 +0000 2024-12-21T09:58:00+00:00 - - Losing (or gaining) a Genius - https://doc.searls.com/2024/12/20/losing-or-gaining-a-genius/ - Sixteenth in the News Commons series. Dave Askins is shutting down the B Square Bulletin. This is tragic. And not just for Bloomington and Monroe County. (Dave covered the governing bodies of both like a glove.) It’s tragic for journalism. Because Dave is far more than an exemplar of reporting in service to the public. […] - Fri, 20 Dec 2024 21:27:06 +0000 - 2024-12-20T21:27:06+00:00 - No Water is Death https://herestomwiththeweather.com/2024/12/20/no-water-is-death/ @@ -1086,15 +1131,6 @@ in-browser playground. Fri, 20 Dec 2024 15:06:17 +0000 2024-12-20T15:06:17+00:00 - - Meta Contributes to 178K EUR to OpenStreetMap - https://werd.io/2024/meta-contributes-to-178k-eur-to-openstreetmap - - - [OpenStreetMap] Meta has contributed 178,710 Euros (an oddly specific number!) 
to OpenStreetMap. On one level: hooray for people contributing to open source. On another: Meta has a $1.5 Trillion market cap and uses OpenStreetMap in multiple applications. To be fair, it also provides direct non-monetary contributions, but regardless, when all is said and done, it's a bargain. - Fri, 20 Dec 2024 13:49:47 +0000 - 2024-12-20T13:49:47+00:00 - OpenID Foundatiion の理事選挙(2025)に立候補しました https://www.sakimura.org/2024/12/6631/ @@ -1102,15 +1138,6 @@ in-browser playground. Fri, 20 Dec 2024 08:50:07 +0000 2024-12-20T08:50:07+00:00 - - Companies issuing RTO mandates “lose their best talent”: Study - https://werd.io/2024/companies-issuing-rto-mandates-lose-their-best-talent-study - - - [Scharon Harding at Ars Technica] From the "gee, you don't say" department: "Return-to-office (RTO) mandates have caused companies to lose some of their best workers, a study tracking over 3 million workers at 54 "high-tech and financial" firms at the S&P 500 index has found. These companies also have greater challenges finding new talent, the report concluded." The st - Fri, 20 Dec 2024 02:43:03 +0000 - 2024-12-20T02:43:03+00:00 - モバイル運転免許証に関する用語を見ていきます https://idmlab.eidentity.jp/2024/12/blog-post.html @@ -1153,15 +1180,6 @@ in-browser playground. Wed, 18 Dec 2024 00:59:00 +0000 2024-12-18T00:59:00+00:00 - - Hello, Social Web 👋🏼 - https://werd.io/2024/hello-social-web - - - [A New Social] I'm psyched about this announcement: "We're A New Social, a new non-profit organization focused on building cross-protocol tools and services for the open social web. [...] The first project we'll take on to accomplish this mission is Bridgy Fed, a service that enables users of ActivityPub-based platforms like Mastodon, ATProto-based platforms like Bluesky, a - Tue, 17 Dec 2024 19:53:38 +0000 - 2024-12-17T19:53:38+00:00 - How Shopify Built Its Live Globe for Black Friday https://newsletter.pragmaticengineer.com/p/shopify-black-friday @@ -1277,7 +1295,7 @@ in-browser playground. 
Using Entra External ID with an Auth0 OpenID Connect identity provider https://damienbod.com/2024/12/09/using-entra-external-id-with-an-auth0-openid-connect-identity-provider/ - This post looks at implementing an Open ID Connect identity provider in Microsoft Entra External ID. Auth0 is used as the identity provider and an ASP.NET Core application is used to test the authentication. Microsoft Entra External ID federates to Auth0. Client code: https://github.com/damienbod/EntraExternalIdCiam Microsoft Entra External ID supports federation using OpenID Connect and was […] + This post looks at implementing an Open ID Connect identity provider in Microsoft Entra External ID. Auth0 is used as the identity provider and an ASP.NET Core application is used to test the authentication. Microsoft Entra External ID federates to Auth0. Client code: https://github.com/damienbod/EntraExternalIdCiam Microsoft Entra External ID supports federation using OpenID Connect and was … … Co Mon, 09 Dec 2024 05:39:40 +0000 2024-12-09T05:39:40+00:00 @@ -1295,13 +1313,6 @@ in-browser playground. Sun, 08 Dec 2024 16:10:40 +0000 2024-12-08T16:10:40+00:00 - - 2024年のGartner Magic Quadrant(アクセス管理分野)が発表されています - https://idmlab.eidentity.jp/2024/12/2024gartner-magic-quadrant.html - こんにちは、富士榮です。 この領域にいるとよくマーケティングなどで使われるのがガートナー社が出しているハイプサイクルやマジック・クァドラントです。今回はマジック・クァドラントですが、これは毎年アクセス管理をはじめ、様々な分野で発表されている各社のサービスが当該領域でどのようなポジション(リーダーなのかチャレンジャーなのか、など)に位置するのかを評価したものです。 今回はアクセス管理領域について発表されたので、リーダー領域に位置する各社がプレスを出しています。 出典)ガートナー 今回リーダーと位置付けられているのは、Microsoft、Okta、Ping Identityですね。 各社プレスを出しています。 Microsoft https://www.microsoft.com/en-us/security/blog/2024/12/05/8-years-as-a- - Sat, 07 Dec 2024 22:30:00 +0000 - 2024-12-07T22:30:00+00:00 - Integrity Properties for Federations https://self-issued.info/?p=2597 @@ -1354,7 +1365,7 @@ in-browser playground. 
Using ASP.NET Core with Azure Key Vault https://damienbod.com/2024/12/02/using-asp-net-core-with-azure-key-vault/ - This article looks at setting up an ASP.NET Core application to use Azure Key Vault. When deployed to Azure, it works like in the Azure documentation but when working on development PCs, some changes are required for a smooth developer experience. Code: https://github.com/damienbod/UsingAzureKeyVaultInDevelopment I develop using Visual Studio and manage multiple accounts and test environments. […] + This article looks at setting up an ASP.NET Core application to use Azure Key Vault. When deployed to Azure, it works like in the Azure documentation but when working on development PCs, some changes are required for a smooth developer experience. Code: https://github.com/damienbod/UsingAzureKeyVaultInDevelopment I develop using Visual Studio and manage multiple accounts and test environments. … … Mon, 02 Dec 2024 06:34:46 +0000 2024-12-02T06:34:46+00:00 @@ -1521,7 +1532,7 @@ path to find extension directories. ASP.NET Core BFF using OpenID Connect and Vue.js https://damienbod.com/2024/11/18/asp-net-core-bff-using-openid-connect-and-vue-js/ - This article shows how to implement a secure web application using Vue.js and ASP.NET Core. The web application implements the backend for frontend security architecture (BFF) and deploys both technical stacks as one web application. HTTP only secure cookies are used to persist the session. OpenIddict is used as the identity provider and the token […] + This article shows how to implement a secure web application using Vue.js and ASP.NET Core. The web application implements the backend for frontend security architecture (BFF) and deploys both technical stacks as one web application. HTTP only secure cookies are used to persist the session. 
OpenIddict is used as the identity provider and the token … … Continue reading → Mon, 18 Nov 2024 07:07:02 +0000 2024-11-18T07:07:02+00:00 @@ -1648,7 +1659,7 @@ build an in-browser playground for it. ASP.NET Core and Angular BFF using a YARP downstream API protected using certificate authentication https://damienbod.com/2024/11/04/asp-net-core-and-angular-bff-using-a-yarp-downstream-api-protected-using-certificate-authentication/ - This article demonstrates how to implement a downstream API protected by certificate authentication using Microsoft YARP reverse proxy in an ASP.NET Core web application. The application uses Angular for its UI and secures both the UI and the ASP.NET Core backend through a backend-for-frontend security architecture. The downstream API is secured with certificate authentication and […] + This article demonstrates how to implement a downstream API protected by certificate authentication using Microsoft YARP reverse proxy in an ASP.NET Core web application. The application uses Angular for its UI and secures both the UI and the ASP.NET Core backend through a backend-for-frontend security architecture. The downstream API is secured with certificate authentication and … … Continue read Mon, 04 Nov 2024 06:43:23 +0000 2024-11-04T06:43:23+00:00 @@ -1710,13 +1721,6 @@ museums I visited. Sat, 26 Oct 2024 16:21:07 +0000 2024-10-26T16:21:07+00:00 - - There’s an election coming up and I can’t believe we’re still debating it. - https://benwerd.medium.com/theres-an-election-coming-up-and-i-can-t-believe-we-re-still-debating-it-4f705d05bd51?source=rss-3b16402f5b9d------2 - How is it this close? Continue reading on Medium » - Sat, 26 Oct 2024 14:53:30 +0000 - 2024-10-26T14:53:30+00:00 - What Claude and ChatGPT can see on your screen https://blog.jonudell.net/2024/10/25/what-claude-and-chatgpt-can-see-on-your-screen/ @@ -1746,7 +1750,7 @@ museums I visited. 
Implement security headers for an ASP.NET Core API supporting OpenAPI Swagger UI https://damienbod.com/2024/10/21/implement-security-headers-for-an-api-supporting-openapi-swagger-ui/ - This article shows how to implement security headers for an application supporting an API and a swagger UI created from a open API in .NET 9. The security headers are implemented using the NetEscapades.AspNetCore.SecurityHeaders Nuget packages from Andrew Lock. Code: https://github.com/damienbod/WebApiOpenApi Deploying a web application which supports both an API and a UI have different […] + This article shows how to implement security headers for an application supporting an API and a swagger UI created from a open API in .NET 9. The security headers are implemented using the NetEscapades.AspNetCore.SecurityHeaders Nuget packages from Andrew Lock. Code: https://github.com/damienbod/WebApiOpenApi Deploying a web application which supports both an API and a UI have different … … Continu Mon, 21 Oct 2024 07:26:03 +0000 2024-10-21T07:26:03+00:00 @@ -1857,7 +1861,7 @@ EU will showcases some exemplary extension use cases. Microsoft Entra ID App-to-App security architecture https://damienbod.com/2024/10/07/microsoft-entra-id-app-to-app-security-architecture/ - This article looks at the different setups when using App-to-App security with Microsoft Entra ID (OAuth client credentials). Microsoft Entra App registrations are used to configure the OAuth clients and resources. For each tenant, an Enterprise application is created for the client App registration when the consent is granted. The claims in the access token […] + This article looks at the different setups when using App-to-App security with Microsoft Entra ID (OAuth client credentials). Microsoft Entra App registrations are used to configure the OAuth clients and resources. For each tenant, an Enterprise application is created for the client App registration when the consent is granted. 
The claims in the access token … … Continue reading → Mon, 07 Oct 2024 07:25:47 +0000 2024-10-07T07:25:47+00:00 @@ -1907,7 +1911,7 @@ The post Talk to Compensation Coach before signing showing agreement to maximize Implement a Geo-distance search using .NET Aspire, Elasticsearch and ASP.NET Core https://damienbod.com/2024/09/23/implement-a-geo-distance-search-using-net-aspire-elasticsearch-and-asp-net-core/ - This article shows how to implement a geo location search in an ASP.NET Core application using a LeafletJs map. The selected location can be used to find the nearest location with an Elasticsearch Geo-distance query. The Elasticsearch container and the ASP.NET Core UI application are setup for development using .NET Aspire. Code: https://github.com/damienbod/WebGeoElasticsearch Setup For […] + This article shows how to implement a geo location search in an ASP.NET Core application using a LeafletJs map. The selected location can be used to find the nearest location with an Elasticsearch Geo-distance query. The Elasticsearch container and the ASP.NET Core UI application are setup for development using .NET Aspire. Code: https://github.com/damienbod/WebGeoElasticsearch Setup For … … Contin Mon, 23 Sep 2024 08:56:43 +0000 2024-09-23T08:56:43+00:00 @@ -1939,13 +1943,6 @@ The post Talk to Compensation Coach before signing showing agreement to maximize Fri, 20 Sep 2024 19:59:11 +0000 2024-09-20T19:59:11+00:00 - - A Great AI RAG Resource - https://m-ruminer.medium.com/a-great-ai-rag-resource-769c808d76d5?source=rss-7e85224c0a32------2 - I came across a great AI Retrieval Augmented Generation resource. It is a Github repo: Advanced RAG Techniques: Elevating Your Retrieval-Augmented Generation Systems.I’ll just copy and paste their introduction here. “Welcome to one of the most comprehensive and dynamic collections of Retrieval-Augmented Generation (RAG) tutorials available today. 
This repository serves as a hub for cutting-edge t - Thu, 19 Sep 2024 09:53:14 +0000 - 2024-09-19T09:53:14+00:00 - Ask A [cybersecurity] Futurist https://heathervescent.medium.com/ask-a-cybersecurity-futurist-c0178a617317?source=rss-d2cae665ce3c------2 @@ -1956,7 +1953,7 @@ The post Talk to Compensation Coach before signing showing agreement to maximize Using Elasticsearch with .NET Aspire https://damienbod.com/2024/09/16/using-elasticsearch-with-net-aspire/ - This post shows how to use Elasticsearch in .NET Aspire. Elasticsearch is setup to use HTTPS with the dotnet developer certificates and and simple client can be implemented to query the data. Code: https://github.com/damienbod/keycloak-backchannel Setup Two services are setup to run in .NET Aspire. The first service is the official Elasticsearch docker container and deployed […] + This post shows how to use Elasticsearch in .NET Aspire. Elasticsearch is setup to use HTTPS with the dotnet developer certificates and and simple client can be implemented to query the data. Code: https://github.com/damienbod/keycloak-backchannel Setup Two services are setup to run in .NET Aspire. The first service is the official Elasticsearch docker container and deployed … … Continue reading → Mon, 16 Sep 2024 04:24:43 +0000 2024-09-16T04:24:43+00:00 @@ -1992,7 +1989,7 @@ The post Leverage $25K downpayment assistance to protect homebuyers & re Implement OpenID Connect Back-Channel Logout using ASP.NET Core, Keycloak and .NET Aspire https://damienbod.com/2024/09/09/implement-openid-connect-back-channel-logout-using-asp-net-core-keycloak-and-net-aspire/ - This post shows how to implement an OpenID Connect back-channel logout using Keycloak, ASP.NET Core and .NET Aspire. The Keycloak and the Redis cache are run as containers using .NET Aspire. Two ASP.NET Core UI applications are used to demonstrate the server logout. 
Code: https://github.com/damienbod/keycloak-backchannel Setup The applications are run and tested using .NET Aspire. […] + This post shows how to implement an OpenID Connect back-channel logout using Keycloak, ASP.NET Core and .NET Aspire. The Keycloak and the Redis cache are run as containers using .NET Aspire. Two ASP.NET Core UI applications are used to demonstrate the server logout. Code: https://github.com/damienbod/keycloak-backchannel Setup The applications are run and tested using .NET Aspire. … … Continue read Mon, 09 Sep 2024 06:09:51 +0000 2024-09-09T06:09:51+00:00 diff --git a/docs/index.html b/docs/index.html index a9ba7be94..70dde4f10 100755 --- a/docs/index.html +++ b/docs/index.html @@ -741,7 +741,7 @@

Built with

-Last Update 6:47 AM January 11, 2025 (UTC) +Last Update 6:47 AM January 12, 2025 (UTC)

Identity Blog Catcher

@@ -751,12 +751,838 @@

Identity Blog Catcher

+

+ Sunday, 12. January 2025 +

+ + + +
+ + +

+ Simon Willison +

+ + +

+ + + + Generative AI – The Power and the Glory + +

+ +
+ +
+ + + Generative AI – The Power and the Glory +Michael Liebreich's epic report for BloombergNEF on the current state of play with regards to generative AI, energy usage and data center growth. + I learned so much from reading this. If you're at all interested in the energy impact of the latest wave of AI tools I recommend spending some time with this article. + Just a few of the points that stood out to + + + + + + + +
+
+ + +
+ +

Generative AI – The Power and the Glory

+Michael Liebreich's epic report for BloombergNEF on the current state of play with regards to generative AI, energy usage and data center growth.

+

I learned so much from reading this. If you're at all interested in the energy impact of the latest wave of AI tools I recommend spending some time with this article.

+

Just a few of the points that stood out to me:

+ + This isn't the first time a leap in data center power use has been predicted. In 2007 the EPA predicted data center energy usage would double: it didn't, thanks to efficiency gains from better servers and the shift from in-house to cloud hosting. In 2017 the WEF predicted cryptocurrency could consume all the world's electric power by 2020, which was cut short by the first crypto bubble burst. Is this time different? Maybe. + Michael reiterates (Sequoia) David Cahn's $600B question, pointing out that if the anticipated infrastructure spend on AI requires $600bn in annual revenue that means 1 billion people will need to spend $600/year or 100 million intensive users will need to spend $6,000/year. + Existing data centers often have a power capacity of less than 10MW, but new AI-training focused data centers tend to be in the 75-150MW range, due to the need to colocate vast numbers of GPUs for efficient communication between them - these can at least be located anywhere in the world. Inference is a lot less demanding as the GPUs don't need to collaborate in the same way, but it needs to be close to human population centers to provide low latency responses. + NVIDIA are claiming huge efficiency gains. "Nvidia claims to have delivered a 45,000 improvement in energy efficiency per token (a unit of data processed by AI models) over the past eight years" - and that "training a 1.8 trillion-parameter model using Blackwell GPUs, which only required 4MW, versus 15MW using the previous Hopper architecture". + Michael's own global estimate is "45GW of additional demand by 2030", which he points out is "equivalent to one third of the power demand from the world’s aluminum smelters". But much of this demand needs to be local, which makes things a lot more challenging, especially given the need to integrate with the existing grid. 
+ Google, Microsoft, Meta and Amazon all have net-zero emission targets which they take very seriously, making them "some of the most significant corporate purchasers of renewable energy in the world". This helps explain why they're taking very real interest in nuclear power. + +

Elon's 100,000-GPU data center in Memphis currently runs on gas:

+
+

When Elon Musk rushed to get x.AI's Memphis Supercluster up and running in record time, he brought in 14 mobile natural gas-powered generators, each of them generating 2.5MW. It seems they do not require an air quality permit, as long as they do not remain in the same location for more than 364 days.

+
+ + +

Here's a reassuring statistic: "91% of all new power capacity added worldwide in 2023 was wind and solar".

+ + +

There's so much more in there, I feel like I'm doing the article a disservice by attempting to extract just the points above.

+

Michael's conclusion is somewhat optimistic:

+
+

In the end, the tech titans will find out that the best way to power AI data centers is in the traditional way, by building the same generating technologies as are proving most cost effective for other users, connecting them to a robust and resilient grid, and working with local communities. [...]

+

When it comes to new technologies – be it SMRs, fusion, novel renewables or superconducting transmission lines – it is a blessing to have some cash-rich, technologically advanced, risk-tolerant players creating demand, which has for decades been missing in low-growth developed world power markets.

+
+

(BloombergNEF is an energy research group acquired by Bloomberg in 2009, originally founded by Michael as New Energy Finance in 2004.) + +

Via Jamie Matthews

+ + +

Tags: ai, ethics, generative-ai, energy

+ + + + + +
+
+ + + +
+ +
+ + + + +
+ + + +
+ + +

+ IdM Laboratory +

+ + +

+ + + + Google has published a paper on zero-knowledge proofs for ECDSA + 

+ +
+ +
+ + + Hello, this is Fujie. As I touched on in the posts on the AAMVA mobile driver's license guidelines, zero-knowledge proofs will become important for addressing the linkability of mdoc and SD-JWT. At the end of last year, researchers at Google published a paper titled Anonymous credentials from ECDSA. https://eprint.iacr.org/2024/2010 Generating an illustration for the post with AI leads to all sorts of odd results, which is amusing. To pull the key points out of the abstract: existing schemes such as BBS+ impose strict requirements on cryptographic suite support, so this work makes anonymous credentials possible with ECDSA, allowing legacy deployments to be supported. Part of the difficulty arises because schemes in the literature, + + + + 
+ + +
+ + + +
+
+ + +
+ +

こんにちは、富士榮です。

AAMVAのモバイル運転免許証のガイドラインでも触れましたが、mdocやSD-JWTのリンク可能性へ対応するためには今後ゼロ知識証明が大切になります。

年末にGoogleの研究者が

Anonymous credentials from ECDSA

というタイトルでペーパーを出しています。

https://eprint.iacr.org/2024/2010

AIでイラスト生成すると色々とおかしなことになって面白いですねw

アブストラクトの中からポイントを抜粋すると、従来のBBS+では暗号スイートへの対応に関する要件が厳しかったのでレガシーで対応できるようにECDSAでもできるようにしたよ、ということのようですね。

Part of the difficulty arises because schemes in the literature, such as BBS+, use new cryptographic assumptions that require system-wide changes to existing issuer infrastructure.  In addition,  issuers often require digital identity credentials to be *device-bound* by incorporating the device’s secure element into the presentation flow.  As a result, schemes like BBS+ require updates to the hardware secure elements and OS on every user's device.

その難しさの一部は、BBS+などの文献に記載されているスキームが、既存の発行者インフラストラクチャにシステム全体にわたる変更を必要とする新しい暗号化前提条件を使用していることに起因しています。さらに、発行者は、デバイスのセキュアエレメントを提示フローに組み込むことで、デジタルID認証をデバイスに紐づけることを求めることがよくあります。その結果、BBS+のようなスキームでは、すべてのユーザーのデバイスのハードウェアセキュアエレメントとOSのアップデートが必要になります。

In this paper, we propose a new anonymous credential scheme for the popular and legacy-deployed Elliptic Curve Digital Signature Algorithm (ECDSA) signature scheme.  By adding efficient zk arguments for statements about SHA256 and document parsing for ISO-standardized identity formats, our anonymous credential scheme is that first one that can be deployed *without* changing any issuer processes, *without* requiring changes to mobile devices, and *without* requiring non-standard cryptographic assumptions.

本稿では、広く普及し、レガシーシステムにも導入されている楕円曲線デジタル署名アルゴリズム(ECDSA)署名スキームのための新しい匿名クレデンシャルスキームを提案する。 SHA256に関する効率的なzk引数と、ISO標準化されたIDフォーマットの文書解析を追加することで、この匿名クレデンシャルスキームは、発行者側のプロセスを変更することなく、モバイルデバイスの変更を必要とすることなく、また、非標準の暗号化前提条件を必要とすることなく実装できる初めてのスキームです。

This looks quite promising. On generation speed, there is also this note:

Our proofs for ECDSA can be generated in 60ms.  When incorporated into a fully standardized identity protocol such as the ISO MDOC standard, we can generate a zero-knowledge proof for the MDOC presentation flow in 1.2 seconds on mobile devices depending on the credential size. These advantages make our scheme a promising candidate for privacy-preserving digital identity applications.


Generating a zero-knowledge proof in 1.2 seconds at mdoc presentation time seems practical enough.

The full text of the paper is available as a PDF, so I plan to work through it in due course.
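The full scheme is far beyond a blog post, but the underlying sigma-protocol idea that anonymous-credential constructions build on can be sketched in a few lines. Below is a toy non-interactive Schnorr proof of knowledge of a discrete log (made non-interactive via Fiat-Shamir) in Python. The group parameters are deliberately tiny and insecure, and this is not the paper's ECDSA construction, just an illustration of what "proving knowledge without revealing the secret" means:

```python
import hashlib
import secrets

# Toy non-interactive Schnorr proof of knowledge of a discrete log,
# made non-interactive with the Fiat-Shamir heuristic. Illustrative only:
# the group below is far too small for real security, and this is NOT the
# ECDSA-based scheme from the paper -- just the classic sigma-protocol
# idea that anonymous-credential schemes build on.

p = 2039  # safe prime: p = 2q + 1
q = 1019  # prime order of the subgroup
g = 4     # generator of the order-q subgroup

def challenge(*values: int) -> int:
    """Fiat-Shamir challenge: hash the transcript down to an exponent."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)   # ephemeral nonce
    t = pow(g, r, p)           # commitment
    c = challenge(g, y, t)     # challenge bound to the transcript
    s = (r + c * x) % q        # response
    return y, (t, s)

def verify(y: int, proof: tuple) -> bool:
    t, s = proof
    c = challenge(g, y, t)
    # g^s == t * y^c (mod p) holds exactly when the prover knew x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q - 1) + 1
y, proof = prove(x)
t, s = proof
print(verify(y, proof))             # True
print(verify(y, (t, (s + 1) % q)))  # False: tampered response
```

The paper's contribution is, roughly, doing this kind of proof about an ECDSA signature and SHA256 inside the proof itself, which is what lets issuers keep their existing infrastructure.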

Saturday, 11. January 2025


Doc Searls Weblog


What Are Stories?


Eighteenth in the New Commons series.


Several generations ago, my pal Jerry and I were cutting a hole between the ceiling studs of a rented house in Durham, North Carolina. This was our first step toward installing a drop-down stairway to an attic space that had been closed since the house, a defunct parsonage for a dead church, was built early that century. We were eager to open the space, and to see what, if anything, might be in the time capsule it contained. In the midst of this, while both of us were choking on plaster dust, Jerry asked this profound question:


What is the base unit of human consciousness?


Without thinking, I answered,


The story.


I said that because I was a journalist. And no journalist who ever worked for a newspaper has gone long without hearing some editor say, What’s the story?


Editors ask that because stories are the only things that interest people. Simple as that.


I was 22 years old and in my first reporting job when the managing editor at my paper made clear that all stories have just three requirements. Not parts. Not sections. Requirements. Here they are:

Character(s)
Problem(s)
Movement

That’s it.


This visual might help:


The character can be a person, a team, a cause, a political party, or any noun in which you can invest an emotion. Love and hate work best, but anything other than indifference will do. You can also have more than one of them, including yourself, since you are the main protagonist in every one of your life’s stories.


The problem can be anything that involves conflict or struggle. Problems keep you tuned in, turning the page, returning to see what happened, what will happen next, or what might happen. There can be any number of problems as well. You can soften these by calling them a challenge, but the point is the same. Stories don’t start with Happily Ever After.


Movement has to be forward. That's it. You don't need a conclusion unless the story ends.


Take away any of those requirements, and you don’t have a story. Or a life. Or anything interesting.


Look at everyone you care about, everything you want, every game you play, every project you work on, every test you take, every class you attend, every course you study, every language you learn. All are stories or parts of them, or pregnant with the promise of them. Because stories are what we care about.


Think of those requirements as three elements that make the molecule we call a story.


Now think of every news medium as a source of almost nothing but story molecules.


Is that all journalism should be?


I submit that stories are pretty much all journalism is.


I harp on this because journalism (the good and honest kind) works in the larger environment we call facts.


We can have better stories if we have more and better facts.


And, if we preserve both stories and facts, we’ll have better journalism.


My next post on this, tomorrow, will be about facts.


Can we make those more interesting as characters?


Only if we can make clear what their problems are, and how we—the story-tellers—can make the most interesting use of them.


Are you still wondering what Jerry and I found in that attic?


Alas, nothing. But it did make a useful space.


Decades later, it looks pretty good, and I see there’s a nice window in the front dormer:


The address is 1810 Lakewood Avenue. I also see the dead church behind it, at 1811 Palmer, is now a live community center:


I have more stories about both of them… How there was once a shoot-out in the back yard. How our cat (named Motorcat, because you could hear him purr in another room) was such an alpha predator that he took out countless large rats, and once ate a rabbit in the kitchen while we were gone, leaving just one little bone. How the least pesty mouse, called Old Half-tail, asked me with gestures to move him to the woods somewhere, so he’d be more safe. How we could still heat the place with anthracite coal in the original fireplaces that were built for it. The list goes on.


All of that is not much as history, but there are facts involved that might be interesting to the current owners, who (we can see) are working on expanding the place.


The world is full of such stuff. Let’s make better use of as much as we can find.


I’d like to start in Los Angeles, where the need for good facts is extremely high right now, and so many places where facts were kept—over twelve thousand homes, at last count—are gone.


We have the Internet now. We have AI. In these early decades of our new Digital Age, our collective tabula is still mostly rasa. Writing facts on it, and not just stories, should be Job One for journalism.


Simon Willison


Agents


Chip Huyen's 8,000-word practical guide to building useful LLM-driven workflows that take advantage of tools.


Chip starts by providing a definition of "agents" to be used in the piece - in this case it's LLM systems that plan an approach and then run tools in a loop until a goal is achieved. I like how she ties it back to the classic Norvig "thermostat" model - where an agent is "anything that can perceive its environment and act upon that environment" - by classifying tools as read-only actions (sensors) and write actions (actuators).
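The sensor/actuator framing is easy to make concrete. Here is a minimal, hypothetical sketch of such a loop in Python; the planner is a hard-coded stub standing in for an LLM, and the thermostat-style tool names are invented for illustration, not taken from the article:

```python
# Minimal sketch of the agent pattern described above: a planner picks tools
# in a loop until the goal is met. The "LLM" here is a hard-coded stub and
# the tool names are hypothetical -- the point is the read-only (sensor)
# vs. write (actuator) split, not a production design.

READ_TOOLS = {                      # sensors: perceive the environment
    "get_temperature": lambda state: state["temperature"],
}
WRITE_TOOLS = {                     # actuators: act on the environment
    "set_heater": lambda state, on: state.update(heater=on),
}

def stub_planner(goal_temp, state):
    """Stand-in for an LLM: choose the next tool call from observations."""
    temp = READ_TOOLS["get_temperature"](state)
    if temp < goal_temp:
        return ("set_heater", True)
    return None                     # goal reached: stop the loop

def run_agent(goal_temp, state, max_steps=10):
    for _ in range(max_steps):      # always bound agent loops
        action = stub_planner(goal_temp, state)
        if action is None:
            break
        name, arg = action
        WRITE_TOOLS[name](state, arg)
        if state["heater"]:         # crude environment simulation
            state["temperature"] += 5
    return state

state = run_agent(goal_temp=20, state={"temperature": 12, "heater": False})
print(state["temperature"])   # 22
```

Swapping the stub for a real model call, and the lambdas for real APIs, gives the loop Huyen describes; the step bound guards against plans that never converge.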


There's a lot of great advice in this piece. The section on planning is particularly strong, showing a system prompt with embedded examples and offering these tips on improving the planning process:

Write a better system prompt with more examples.
Give better descriptions of the tools and their parameters so that the model understands them better.
Rewrite the functions themselves to make them simpler, such as refactoring a complex function into two simpler functions.
Use a stronger model. In general, stronger models are better at planning.

The article is adapted from Chip's brand-new O'Reilly book AI Engineering. I think this is an excellent advertisement for the book itself.

Via @chiphuyen.bsky.social


Tags: ai-agents, llms, ai, generative-ai, llm-tool-use


Doc Searls Weblog


Aviation vs. Fire


3:22pm—Hats off to Miles Archer for the links below, one of which goes here—


—showing all the aircraft and their paths at once. You can start here at https://globe.adsbexchange.com/, which is kind of your slate that’s blank except for live aircraft over the Palisades Fire:


Meanwhile all the media are reporting one home loss, in the 3000 block of Mandeville Canyon Road in Brentwood.


As you can see above, most of the action right now is on the north flank of the Palisades fire, along the crest of the ridge:


Here is a Chinook dropping water alongside Mandeville Canyon Road near where it adjoins Mulholland Drive:


I should pause here to say I’m just getting acquainted with ADS-B Exchange, the “World’s largest source of unfiltered flight data.” Here’s the About page. Bottom line: “In essence, ADS-B Exchange is more than just a flight-tracking website; it’s a dynamic, collaborative community committed to bringing transparency and inclusivity to the world of aviation.” It has a pile of social channels, and lots of ways to join in.


9:00am—The battle against wildfires in Los Angeles is almost entirely won by aerial firefighting. Helicopters and airplanes dropping water and retardants on fires and along perimeters saved Hollywood from the Sunset Fire two nights ago. They saved Encino from the Paradise Fire last night, and they are saving Brentwood right now. What we see above, thanks to KABC/7, is N43CU, a Boeing CH-47D Chinook, gathering water in Stone Canyon Reservoir to dump on the Palisades Fire in Brentwood. Here is its recent flight path, thanks to FlightRadar24:


And here is N60VC, a Sikorsky HH-60L Firehawk from Ventura County Fire Protection, filling up in the Encino Reservoir and running its routes over the fire:


And here is Cal Fire’s CFR605, a Sikorsky S-70i Firehawk:


They can do all this because the winds right now are relatively calm, as they also were last night above Encino and the night before above Hollywood. When the winds are too strong for them to handle, we have what happened to Pacific Palisades and Altadena.


Some flights are mysteries (at least to me), but seem to have some relevance, such as this Piper out of Riverside, weaving back and forth across three of the fire regions:


I want to know more about that one because I want to know more about everything, and to share as much as I can, as much for historical reasons as to satisfy current curiosities.


Anyway, if all goes well, the fire will burn a maximum spread of fuel (desert grass, forest, and chaparral), creating fire breaks good for a year or two—and then stop spreading short of houses and neighborhoods. Lord willin' and the wind don't come, all these fires will be sufficiently contained.


Also, if we’re lucky, Winter—our rainy season—will finally arrive, all the brown will turn green, and the fire season won’t return until late Spring.


Three bonus links:

The Architects Of L.A.’s Wildfire Devastation, by Katya Schwenk in The Lever. She makes a sensible case against development in areas such as the ones being saved in Brentwood right now. But she doesn’t mention a second danger. That’s why you need to read—

Los Angeles Against the Mountains, by John McPhee in The New Yorker. That ran in 1988, and later in his book The Control of Nature. McPhee is the Shakespeare, the Rembrandt, the Beethoven, of nonfiction. What he says about where and how we live with danger is essential for making sense out of both the fires today, and the debris flows they assure when big rain comes. Which it will. A pull-quote: “The phalanxed communities of Los Angeles have pushed themselves hard against these mountains, an aggression that requires a deep defense budget to contend with the results.”

Making sense of what happened to Montecito, which I posted here in 2018.

Werdmüller on Medium


Building an open web that protects us from harm


It’s not enough to be neutral. We must be allies.

Continue reading on Medium »


Ben Werdmüller


Building an open web that protects us from harm


We live in a world where right-wing nationalism is on the rise and many governments, including the incoming Trump administration, are promising mass deportations. Trump in particular has discussed building camps as part of mass deportations. This question used to feel more hypothetical than it does today.

Faced with this reality, it’s worth asking: who would stand by you if this kind of authoritarianism took hold in your life?

You can break allyship down into several key areas of life:

Who in your personal life is an ally? (Your friends, acquaintances, and extended family.)
Who in your professional life is an ally? (People you work with, people in partner organizations, and your industry.)
Who in civic life is an ally? (Your representatives, government workers, individual members of law enforcement, healthcare workers, and so on.)
Which service providers are allies? (The people you depend on for goods and services — including stores, delivery services, and internet services.)

And each of these, in turn, can be broken down further:

Who will actively help you evade an authoritarian regime?
Who will refuse to collaborate with a regime’s demands?

These two things are different. There’s also a third option — non-collaboration but non-refusal — which I would argue does not constitute allyship at all. This might look like passively complying with authoritarian demands when legally compelled, without taking steps to resist or protect the vulnerable. While this might not seem overtly harmful, it leaves those at risk exposed. As Naomi Shulman points out, the most dangerous complicity often comes from those who quietly comply. Nice people made the best Nazis.

For the remainder of this post, I will focus on the roles of internet service vendors and protocol authors in shaping allyship and resisting authoritarianism.

For these groups, refusing to collaborate means that you’re not capitulating to active demands by an authoritarian regime, but you might not be actively considering how to help people who are vulnerable. The people who are actively helping, on the other hand, are actively considering how to prevent someone from being tracked, identified, and rounded up by a regime, and are putting preventative measures in place. (These might include implementing encryption at rest, minimizing data collection, and ensuring anonymity in user interactions.)
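As one illustration of the data-minimization point above, here is a small, hypothetical Python sketch that stores a keyed pseudonym instead of a raw identifier. Without the secret key, stored records cannot be trivially linked back to a person, and destroying or rotating the key unlinks historical data. This is a toy sketch of one measure, not a complete privacy design:

```python
import hashlib
import hmac
import secrets

# Illustrative data-minimization measure: store a keyed pseudonym instead of
# a raw identifier. A plain (unkeyed) hash would NOT be enough, because
# anyone can hash a guessed email and match it; HMAC requires the key.
# Toy example only -- names and fields here are hypothetical.

PSEUDONYM_KEY = secrets.token_bytes(32)   # keep this out of the datastore

def pseudonymize(identifier: str) -> str:
    """Derive a stable, key-dependent pseudonym for an identifier."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {
    "user": pseudonymize("alice@example.com"),  # no raw email stored
    "action": "posted",
}
# Same input and same key give the same pseudonym, so records still join up:
print(record["user"] == pseudonymize("alice@example.com"))  # True
```

Combined with encryption at rest and simply not collecting fields you do not need, this is the kind of preventative measure the paragraph above is pointing at.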

If we consider an employer, refusing to collaborate means that you won’t actively hand over someone’s details on request. Actively helping might mean aiding someone in hiding or escaping to another jurisdiction.

These questions of allyship apply not just to individuals and organizations, but also to the systems we design and the technologies we champion. Those of us who are involved in movements to liberate social software from centralized corporations need to consider our roles. Is decentralization enough? Should we be allies? What kind of allies?

This responsibility extends beyond individual actions to the frameworks we build and the partnerships we form within open ecosystems. While building an open protocol that makes all content public and allows indefinite tracking of user activity without consent may not amount to collusion, it is also far from allyship. Partnering with companies that collaborate with an authoritarian regime, for example by removing support for specific vulnerable communities and enabling the spread of hate speech, may also not constitute allyship. Even if it furthers your immediate stated technical and business goals to have that partner on board, it may undermine your stated social goals. Short-term compromises for technical or business gains may seem pragmatic but risk undermining the ethics that underpin open and decentralized systems.

Obviously, the point of an open protocol is that anyone can use it. But we should avoid enabling entities that collude with authoritarian regimes to become significant contributors to or influencers of open protocols and platforms. While open protocols can be used by anyone, we must distinguish between passive use and active collaboration. Enabling authoritarian-aligned entities to shape the direction or governance of these protocols undermines their potential for liberation.

In light of Mark Zuckerberg’s clear acquiescence to the incoming Trump administration (for example by rolling back DEI, allowing hate speech, and making a series of bizarre statements designed to placate Trump himself), I now believe Threads should not be allowed to be an active collaborator to open protocols unless it can attest that it will not collude, and that it will protect vulnerable groups using its platforms from harm. I also think Bluesky’s AT Protocol decision to make content and user blocks completely open and discoverable should be revisited. I also believe there should be an ethical bill of rights for users on open social media protocols that authors should sign, which includes the right to privacy, freedom from surveillance, safeguards against hate speech, and strong protections for vulnerable communities.

As builders, users, and advocates of open systems, we must demand transparency, accountability, and ethical commitments from all contributors to open protocols. Without these safeguards, we risk creating tools that enable oppression rather than resisting it. Allyship demands more than neutrality — it demands action.


The Good, The Bad, And The Stupid In Meta’s New Content Moderation Policies


[Mike Masnick in TechDirt]

Mark Zuckerberg is very obviously running scared from the incoming Trump administration:

"Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last few days, he also had GOP-whisperer Joel Kaplan replace Nick Clegg as the company’s head of global policy. On Monday it was announced that Zuckerberg had also appointed Dana White to Meta’s board. White is the CEO of UFC, but also (perhaps more importantly) a close friend of Trump’s."

He then announced a new set of moderation changes.

As Mike Masnick notes here, Facebook's moderation was terrible and has always been terrible. It tried to use AI to improve its moderation at scale, with predictable results. It simply hasn't worked, and that's often harmed vulnerable communities and voices in the process. So it makes sense to take a different approach.

But Zuckerberg is trying to paint these changes as being pro free speech, and that doesn't ring true. For example, trying to paint fact-checking as censorship is beyond stupid:

"Of course, bad faith actors, particularly on the right, have long tried to paint fact-checking as “censorship.” But this talking point, which we’ve debunked before, is utter nonsense. Fact-checking is the epitome of “more speech”— exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is."

This is all of a piece with Zuckerberg's rolling back of much-needed DEI programs and his suggestion that most companies need more masculine energy. It's for show to please a permatanned audience of one and avoid existential threats to his business.

I would love to read the inside story in a few years. For now, we've just got to accept that everything being incredibly dumb is all part of living in 2025.


#Technology


[Link]


Mullenweg Shuts Down WordPress Sustainability Team, Igniting Backlash


[Rae Morey at The Repository]

The bananas activity continues over at Automattic / Matt Mullenweg's house:

"Members of the fledgling WordPress Sustainability Team have been left reeling after WordPress co-founder Matt Mullenweg abruptly dissolved the team this week.

[...] The disbandment happened after team rep Thijs Buijs announced in Making WordPress Slack on Wednesday that he was stepping down from his role, citing a Reddit thread Mullenweg created on Christmas Eve asking for suggestions to create WordPress drama in 2025."

Meanwhile, a day earlier, Automattic announced that it will ramp down its own contributions to WordPress:

"To recalibrate and ensure our efforts are as impactful as possible, Automattic will reduce its sponsored contributions to the WordPress project. This is not a step we take lightly. It is a moment to regroup, rethink, and strategically plan how Automatticians can continue contributing in ways that secure the future of WordPress for generations to come. Automatticians who contributed to core will instead focus on for-profit projects within Automattic, such as WordPress.com, Pressable, WPVIP, Jetpack, and WooCommerce. Members of the “community” have said that working on these sorts of things should count as a contribution to WordPress."

This is a genuinely odd thing to do. Yes, it's true that Automattic is at a disadvantage in the sense that it contributes far more to the open source project than other private companies. Free riders have long been a problem for open source innovators. But it's also why the company exists. I have questions about the balance of open source vs proprietary code in Automattic's future offerings. That's important because WordPress is the core value of its products and the open source core guarantees freedom from lock-in.

Is there a proprietary CMS coming down the wire? Is this bizarre board activity behind the scenes? Is something else going on? This whole situation still feels to me like there's another shoe ready to drop - and the longer it goes on, the bigger that shoe seems to be. I hope they don't completely squander the trust and value they've been building for decades.


#Technology


[Link]


Michael Ruminer


AI RAG with LlamaIndex, Local Embedding, and Ollama Llama 3.1 8b


In this post, I cover using LlamaIndex LlamaParse in auto mode to parse a PDF page containing a table, using a Hugging Face local embedding model, and using local Llama 3.1 8b via Ollama to perform naive Retrieval Augmented Generation (RAG). That’s a mouthful. I won’t go into how to set up Ollama and Llama 3.1 8b; this post assumes it is running.

First off, you can find the code for this in my LlamaIndex_Test GitHub repo under the Test1/src folder. At the time of this writing there is a Test0 and a Test1. To see the post about the Test0 code, see Using LlamaIndex — Part 1 OpenAI.

The code uses a .env and load_dotenv() to populate the needed LLAMA_CLOUD_API_KEY. I recommend that if you have an OPENAI_API_KEY entry in the .env that you comment it out for this experiment to prove to yourself that the embedding and LLM are local and not OpenAI. See the part 1 post for more details on the LLAMA_CLOUD_API_KEY.

#OPENAI_API_KEY=YOUR_API_KEY
LLAMA_CLOUD_API_KEY=YOUR_API_KEY

The pip install dependencies I put as comments at the top of the Python file. There is also a requirements.txt for the project as a whole that covers the package requirements for all the “Test” experiments.

# pip install llama-index-embeddings-huggingface
# pip install llama-index-llms-ollama
# pip install llama-index-core llama-parse llama-index-readers-file

The nice thing about LlamaIndex LlamaParse is that it provides an auto mode that will use premium mode when specified criteria are met. In this experiment, I have set auto mode on with triggers for mode change on in-page images or tables. Also, to save on parsing credit usage in LlamaParse and because, for this example, it is all that is needed, I have set the pages to be parsed to PDF page 9 only (note that PDF page 9 is target page 8 to LlamaParse because it uses a 0-based page index). Like the part 1 post, I am using markdown output because it provides greater context to the LLM; though, I did try it with result_type=text and received the proper query response despite the answer to the query being in a table.

# set LlamaParse for markdown output and auto_mode only parsing page 8
parser = LlamaParse(
    result_type="markdown",
    auto_mode=True,
    auto_mode_trigger_on_image_in_page=True,
    auto_mode_trigger_on_table_in_page=True,
    target_pages="8",
    verbose=True,
)

So that you don’t have to open the PDF document that gets parsed to understand the input, below is a screenshot of the page.

As in part 1, I use LlamaParse.load_data to read the page and parse it. Since it has a table in-page and we are in auto mode it will automatically use Premium mode to potentially better handle the page and table. This will cause the page parse to cost 15 credits on LlamaIndex. Note that LlamaIndex will cache your parsed page for 48 hours unless you specify otherwise or change the parse parameters which allows you to run the code more than once and only get the credit cost once. I did try using the default “accurate” mode by removing the auto_mode parameters on the LlamaParse and it still parsed the table properly and returned the proper answer to the query — but this is a sample for showing the use of “auto mode” so just pretend that is not the case.

If you want to see the output of the parser, uncomment the print command after the documents variable is populated. I like to then paste it into a markdown viewer to see it as rendered markdown output. See the below image for that output.

with open(f"../../sample_docs/{file_name}", "rb") as file_to_parse:
    # LlamaParse will cache a parsed document for 48 hours if the parse parameters are not changed,
    # thus not incurring additional parse cost if you run this multiple times for testing purposes;
    # see the history tab in the LlamaParse dashboard for the project to confirm that
    # credits used = 0 for subsequent runs
    #
    # must provide extra_info with file_name key when passing a file object
    documents = parser.load_data(file_to_parse, extra_info=extra_info)
    # to manually check the output, uncomment the below
    # print(documents[0].text)

I like to set the default settings for the LLM and embedding model so that I don’t need to pass them around as parameters. Here is where I set the embedding model to a Hugging Face-provided model. When you run the Python file for the first time, it will pull down the embedding model automatically — nice!

# set the default embeddings and llm so that it doesn't have to be passed around
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.llm = Ollama(model="llama3.1:latest", request_timeout=120.0)

The next part of the code does the same as it did in Part 1, except that this time the VectorStoreIndex and the query engine use the models I set in the Settings singleton versus the LlamaIndex default of OpenAI.

# index the parsed documents using the default embedding model
index = VectorStoreIndex.from_documents(documents)

# generate a query engine for the index using the default llm
query_engine = index.as_query_engine()

# provide the query and output the results
query = "What is the latency in seconds for Nova Micro?"
response = query_engine.query(query)
print(response)

If all goes well you should get the response output as 0.5 and if you look back at the table from the page you’ll see that is correct.

(.venv) PS C:\python\LlamaIndex_Test\Test1\src> python parse_ollama.py
Started parsing the file under job_id 37dce328-aaa7-499b-afe9-498c32b63944
.0.5

To validate that the value was coming from the RAG-provided PDF page and not the LLM’s inherent “knowledge”, I asked a similar question via the command line to Ollama without providing the RAG context. Output below:

PS C:\temp> ollama run llama3.1:latest "what is the latency in seconds for Nova Micro Amazon LLM model?"
I don't have access to specific information about the latency of the Nova Micro Amazon LLM (Large Language Model)
model. The details regarding models like this, especially concerning their performance metrics such as latency,
are typically available from the developers or through official documentation and may be subject to change. If
you're looking for accurate and up-to-date information on this topic, I recommend checking directly with Nova
Micro's resources or contacting them for the most current data.

There you have it. But I am not done quite yet in reporting my results. In LlamaIndex’s examples, they used this PDF but used PDF page 1 which contains an image. See below an image of the page.

They use this page to demonstrate how LlamaParse in auto mode moves into premium mode for the page parsing because of the image and then creates a mermaid diagram from the image because it recognizes the image is of a diagram. Below is what they report as the outcome in part.

# The Amazon Nova Family of Models:
# Technical Report and Model Card

Amazon Artificial General Intelligence

```mermaid
graph TD
A[Text] --> B[Nova Lite]
C[Image] --> B
D[Video] --> E[Nova Pro]
F[Code] --> E
G[Docs] --> E
B --> H[Text]
B --> I[Code]
E --> H
E --> I
J[Text] --> K[Nova Micro]
L[Code] --> K
K --> M[Text]
K --> N[Code]
O[Text] --> P[Nova Canvas]
Q[Image] --> P
P --> R[Image]
S[Text] --> T[Nova Reel]
U[Image] --> T
T --> V[Video]

style B fill:#f9f,stroke:#333,stroke-width:2px
style E fill:#f9f,stroke:#333,stroke-width:2px
style K fill:#f9f,stroke:#333,stroke-width:2px
style P fill:#f9f,stroke:#333,stroke-width:2px
style T fill:#f9f,stroke:#333,stroke-width:2px

classDef input fill:#lightblue,stroke:#333,stroke-width:1px;
class A,C,D,F,G,J,L,O,Q,S,U input;

classDef output fill:#lightgreen,stroke:#333,stroke-width:1px;
class H,I,M,N,R,V output;
```

Figure 1: The Amazon Nova family of models

When I tried this I did not get the same outcome from the parse. It did not even attempt to generate a mermaid diagram. I received the following output for the diagram image section, far from their professed output.

The Amazon Nova Family of Models:
Technical Report and Model Card
Amazon Artificial General Intelligence
Nova
Lite Nova
Nova Micro Ix
Pro <l> <l > </>
A Ix
</>
=
Nova Nova
Canvas Reel
Figure 1: The Amazon Nova family of models

In the experiment, everything is local except LlamaIndex, which is nice. I hope that this example is of use to you.


It’s important to note that winds are calm, allowing aircraft to do their work. This was not possible while Pacific Palisades and Altadena were largely destroyed by the Palisades and Eaton Fires. It was possible during the Sunset and Kenneth fires.

KABC/7 has dramatic video, but also reports that the fire appears to be contained. One grab:


It’s important to note that dramatic pictures can, without meaning to, tell stories that aren’t quite true, or are less true than the picture suggests. For example, in my coverage of the Gap Fire near Santa Barbara in 2008, I used this picture of the Santa Barbara Mission.


When I shot that, I was by a rose garden about 800 feet east of the Mission, looking west past a fire 8 miles away, toward the setting Sun, 93 million miles away. Also, I underexposed the photo to make everything legible (and good photographically). I explained all that in the text of my report. Still, many people looked at the picture and assumed that the Mission was on fire. Likewise, it’s easy to look at TV images of tiny helicopters floating in space above a flaming ridge and a wall of flames, as we see here in the index image used by KABC/7 for its video on the fires—


—and assume that this is a losing battle for the chopper people. It’s a great photograph, but the story it seems to tell is too simple, and too easily misleading.

 


Seventeenth in the News Commons series.


That collection of tabs is my dashboard of major media that inform my writing about the #LAfires. There are tabs for five TV stations, one radio station, and one newspaper:

KNBC/4 “4 Los Angeles”
KTLA/5 “LA’s Very Own”
KABC/7 “7 Eyewitness News”
KCAL/9 “KCAL NEWS CBS Los Angeles”
KTTV/11 “Fox 11 Los Angeles”

Los Angeles Fires and Aftermath (January 9)

—you’ll see which of those I relied on most.


Finally, all of us. I like this, which KABC/7 uses as a bumper between ads and segments:


I like SoCal Strong because Boston Strong worked after the marathon bombing in 2013, Houston Strong worked after Hurricane Harvey in 2017, Parkland Strong worked after the shootings there in 2018, and Louisiana Strong works for whatever goes bad in that state.


Now, what does TOGETHER mean?


Viewers, presumably. But how about the rest of what we might call the media ecosystem?

We see a little of that with the LAist-KCAL partnership. But I think cooperation can go a lot farther than that. Not in any official or even conscious way, but rather by compiling and relying together on the largest possible collection of facts about the future and the past—especially as those facts pertain to the #LAfires, their aftermath, and recovery. And by basing stories on those facts as much as possible.


And that also goes to everyone in social media, podcasting, and the rest of the fact-based news ecosystem.


That also goes to everyone in social media, podcasting, and the rest of the fact-based news ecosystem.

Next, let’s talk about stories. Tune in tomorrow.

 



With over 1000 AI communities deployed in 2024, the next AI cohort begins: Coding, Computer Science, Artificial Intelligence, Entrepreneurial Skill Development, and Teacher PD, with mentor support for all learners, students, and teachers. Unlimited learning by design (own root). OYO® AI by kidOYO® at OYOclass.com, Educational Software Services.



Using LlamaIndex Part 1 — OpenAI

I have started to experiment with LlamaIndex for use in Retrieval Augmented Generation (RAG) document parsing and indexing. My results were mixed on the simple page provided. This is part 1, where I make a short post on LlamaIndex with OpenAI as the LLM component. I expect part 2 to be LlamaIndex with Ollama and Llama3–8b as the LLM components.

This is a very short chunk of code. I also used the LlamaIndex Parse browser-based tool to see if I received different outputs. As one would expect, I did not. You can access the browser-based tool by opening a LlamaIndex account and choosing the “Parse” tool in your dashboard. You’ll need an account if you plan to use the code I provide, and you will also need to generate an API key from your LlamaIndex dashboard. One of the great things about LlamaIndex is that for a paid tool it is generous in its free usage: 1000 credits PER DAY. In “accurate” mode it is 1 credit per page; in “premium” mode it is 15 credits per page. For my simple one-page example the output between the two did not differ.
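As an aside, the free-tier arithmetic is easy to sanity-check. A quick sketch using the credit figures quoted above:

```python
# LlamaParse free-tier credit math, using the figures from the text:
# 1000 free credits per day; "accurate" mode costs 1 credit/page,
# "premium" mode costs 15 credits/page.
FREE_CREDITS_PER_DAY = 1000
COST_PER_PAGE = {"accurate": 1, "premium": 15}

def pages_per_day(mode: str) -> int:
    """Maximum whole pages parseable per day on the free tier in the given mode."""
    return FREE_CREDITS_PER_DAY // COST_PER_PAGE[mode]

print(pages_per_day("accurate"))  # 1000
print(pages_per_day("premium"))   # 66
```

So even premium mode covers dozens of pages a day for free, which is plenty for experiments like this one.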

First the small snippet of code.

# pip install llama-index-embeddings-openai llama-index-llms-openai
# pip install llama-index-core llama-parse llama-index-readers-file

from llama_parse import LlamaParse
from llama_index.core import VectorStoreIndex
from dotenv import load_dotenv


load_dotenv()

parser = LlamaParse(result_type="markdown", verbose=True)

file_name = "ssi-page-5.pdf"
extra_info = {"file_name": file_name}

with open(f"../../sample_docs/{file_name}", "rb") as file_to_parse:
    # must provide extra_info with a file_name key when passing a file object
    documents = parser.load_data(file_to_parse, extra_info=extra_info)
    # to manually check the MD output, uncomment the line below
    # print(documents[0].text)

# index the parsed documents
index = VectorStoreIndex.from_documents(documents)

# generate a query engine for the index
query_engine = index.as_query_engine()

# provide the query and output the results
query = "what are the principles of SSI?"
response = query_engine.query(query)
print(response)

You can find this code and a few sample documents, including the document used in this code, in my LlamaIndex_Test GitHub repo, with the code specifically under the Test0 folder.

Note that I don’t set an LLM or an embedding model. LlamaIndex uses OpenAI as the default LLM and text-embedding-ada-002 as the default embedding model. You will need an OpenAI API key to go along with the LlamaIndex key. My code loads them from the .env file into environment variables, and if they are named appropriately those variables will be found by default. Below is a .env example.

OPENAI_API_KEY=YOUR_API_KEY
LLAMA_CLOUD_API_KEY=YOUR_API_KEY

In the code above I am using a single-page PDF, “ssi-page-5.pdf”. It is page 5 of the larger document, “Self-Sovereign Identity A Systematic Review Mapping and Taxonomy.pdf”. If you plan to send LlamaParse a larger document but use the API properties to tell it to parse only a subset of pages, keep in mind that LlamaParse starts at page 0. The first time I tried this I had an off-by-one issue because I assumed page 1 of the document was, you know, page 1. It was page 0. This is understandable from a programming standpoint but caught me off guard anyway.
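A tiny helper can guard against that off-by-one: convert the 1-based page numbers a PDF viewer shows into the 0-based, comma-separated string LlamaParse expects. (I believe the relevant LlamaParse parameter is `target_pages`; treat that name as an assumption and check the docs.)

```python
def to_target_pages(human_pages):
    """Convert 1-based page numbers (as shown in a PDF viewer) into the
    0-based, comma-separated string LlamaParse expects, e.g. [5] -> "4".
    """
    if any(p < 1 for p in human_pages):
        raise ValueError("human page numbers start at 1")
    return ",".join(str(p - 1) for p in human_pages)

# page 5 of the larger document is page index 4 for LlamaParse
print(to_target_pages([5]))         # "4"
print(to_target_pages([1, 2, 10]))  # "0,1,9"
```

With something like `LlamaParse(result_type="markdown", target_pages=to_target_pages([5]))` the translation happens in one visible place instead of in your head.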

In the example code, I opened a file directly but LlamaIndex provides a directory reader with filters, if you desire to use that instead. The results I got back on the LLM query were spot on as would be expected on a single page of context with a well-outlined section pertinent to my simple query.

You don’t really need the creation of the vector index, query engine, and query/response to test out LlamaIndex parsing. Just uncomment line 23 in the above code (line 19 in the repo code), comment out everything below it, and get the parsed output.

Premium Mode and Auto Mode and Less than Expected Outcomes

In the code, I didn’t try out premium mode or auto mode. I intend to make a separate post about auto mode. I did try them in the LlamaIndex Parse tool. In both, I expected the image at the top of the page to get output as an image in the “Images” tab of the Parse output tool, but it didn’t.

The image at the top of the page is below as a screen capture.

This was disappointing. I’m not sure why this did not provide the expected outcome.

There you have it. A simple bit of code to parse using LlamaIndex. What makes it different from other parsers I have tried (all open source) is that it outputs the results in Markdown, if desired, which is better than the usual plain text I received from other tools. The Markdown gives the LLM more context, even if in my simple case it was not of value. The other difference is that, in theory, it better parses images, tables, etc., but as I explained, I did not get that result. :-( I’ll continue to experiment with it, especially on more complicated pages such as ones that contain a table, and in auto mode via code.

You can find part 2 as “AI RAG with LlamaIndex, Local Embedding, and Ollama Llama 3.1 8b”. The example in part 2 uses LlamaParse auto mode.


Ending a year long posting streak

A year ago today I wrote about Tom Scott's legendary 10 year YouTube streak, in which he posted a new video once a week for the next ten years. Inspired by that, I also started my own.


I set myself the goal of posting something to my blog every day for a year.


Given how much happened in my chosen field of Large Language Models over the course of 2024 this wasn't as hard as I had expected!


One of the lessons I learned from Tom is that it's much healthier for a streak to have a predetermined end - that way the streak can act as a goal that doesn't turn into an ongoing imposition.


I'm calling it: this streak is done. According to my custom dashboard I hit 367 days - December 31st 2023 to December 31st 2024, inclusive (it was a leap year) - 1,151 posts in total.


I'm going to drop back to a much more reasonable target of at least one long-form post per week and at least three days per week with a link or quote - see My approach to running a link blog for how I think about that kind of content.


Posting daily has been fun, but it definitely impacted my productivity on my other projects. My blog runs on UTC so it also resulted in a minor panic coming up to 4pm Pacific coast time if I hadn't posted anything yet!


Almost every post in the streak came out in 2024, so my faceted search engine for 2024 provides a way to explore them.


My 2024 archive page also serves up this illustrative tag cloud:


Tags: blogging, tom-scott, streaks


Simon Willison

Timeline of AI model releases in 2024

VB assembled this detailed timeline of every significant AI model release in 2024, for both API and open weight models.


I'd hoped to include something like this in my 2024 review - I'm glad I didn't bother, because VB's is way better than anything I had planned.


VB built it with assistance from DeepSeek v3, incorporating data from this Artificial Intelligence Timeline project by NHLOCAL. The source code (pleasingly simple HTML, CSS and a tiny bit of JavaScript) is on GitHub.

Via @reach_vb


Tags: ai-assisted-programming, generative-ai, deepseek, ai, llms


Doc Searls Weblog

The Kraken Won


Imagine what would have happened had Martin Winterkorn not imploded, and if Volkswagen, under his watch, had not become a datakraken (data sea-monster, or octopus), spying on drivers and passengers—just like every other car company.


What would the world now be like if Volkswagen since 2014 had established itself as the only car maker not operating datakraken? Or, better yet, if Volkswagen became the one car company collecting data for the cars’ owners first—and for insurance companies and advertisers only by the grace of those owners?


Volkswagen would be for privacy what Volvo was (and maybe still is) for safety—or that Apple is (or wants to be) for privacy. It would have been a brilliant position for VW.


But no. Winterkorn went down, and now Volkswagen is just as bad as the rest of them. Maybe worse:



In October 2014 I posted How Radio Can Defend the Dashboard, sourcing Winterkorn’s speech, and saying “There is already one car company on the customer’s side in this fight: Volkswagen.” The post was written to advise Dash (“the connected car audiotainment conference”), which was about to happen in Detroit. The post created a stir. Everybody I talked to about it at the time was enthused about what I recommended: integrating broadcast signals with the Net, giving collected data to car owners first, switching to the European RDS standard (which would relieve drivers of needing to retune to other signals just to stay with one station), among other ideas.


None of that happened. The flywheels of surveillance capitalism were already too big. Apple and Google were about to turn the dashboard into a phone display with CarPlay and Android Auto. Broadcast radio is now a distressed asset, a walking anachronism. It is being eaten alive on the music side by streaming and on the talk side by podcasting.


But the bigger thing is that we lost the chance for one big car maker to stake a position on personal privacy. Volkswagen could have done it. But it didn’t. And the datakraken won.


For now.


 


Web security, symbolized

Monty Python's parrot sketch is an all-time classic because it plays on the very human experience of being defenseless when someone just blatantly refuses to acknowledge the obvious. Shared reality is a matter of perception, not objective observation. Supported also by various mental biases, including the sunk cost fallacy and the desire to agree with people we perceive as sympathetic or competent, virtually all humans can fall into this trap. Technical experts on Self Sovereign Identity included.

Instead of recognizing that the parrot of Web security is deceased, has gone to meet its maker, is pushing up the daisies, some people keep insisting that it is merely napping, and use trinkets and all kinds of strings and wires to hold it up.

The result is did:tdw, recently rebranded to did:webvh.

Web based DID methods belong to the family of federated identity methods, not Self Sovereign Identity

Using the web for Decentralized Identifiers (DIDs) violates some of the basic principles of Self Sovereign Identity, and effectively restricts the possible properties of the system to that of a classic federated identity protocol, such as OpenID.

Federated identity systems have their uses, and are often “good enough” for usage by large corporations and governments. But they also enable and encourage platform strategies, which has dramatic implications for personal usage, as well as Small and Medium Enterprises (SMEs). The result has been the Surveillance Industry, and a dependency of 95% of our economy on a few, large platform companies.

Self Sovereign Identity has been developed as a concept to break that dependency, and give people control over their own privacy, security and data. Instead, thanks to did:web and its descendants, it increasingly looks like an exercise of putting SSI lipstick on the pig of the federated Web.

You may think this is just hyperbole. So let’s go back to the beginning.

About the principles of SSI

The design goals of Decentralized Identifiers are listed in Section 1.2 of the W3C DID specification:

W3C DID: Design goals for Decentralized Identifiers (DID)

So how well do Web based DID methods meet these goals?

All web based methods, including did:web, did:tdw, did:webvh, and any other web based method anyone might ever come up with, depend on a domain name pointing to a web server. The method-specific identifier is always transformed into an HTTPS request. The DID-to-HTTPS transformation is the same for did:webvh as it is for did:web.
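To make that concrete, here is a minimal sketch of the transformation for did:web, following my reading of the did:web specification (port percent-decoding and error handling are simplified):

```python
from urllib.parse import unquote

def did_web_to_https(did: str) -> str:
    """Transform a did:web identifier into the HTTPS URL it resolves to.

    A bare domain maps to /.well-known/did.json; additional colon-separated
    path segments map to /<segments>/did.json. Simplified sketch.
    """
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    # split the method-specific identifier on ":" and percent-decode each part
    parts = [unquote(p) for p in did[len(prefix):].split(":")]
    domain, path = parts[0], parts[1:]
    if not path:
        return f"https://{domain}/.well-known/did.json"
    return f"https://{domain}/{'/'.join(path)}/did.json"

print(did_web_to_https("did:web:example.com"))
# https://example.com/.well-known/did.json
print(did_web_to_https("did:web:example.com:user:alice"))
# https://example.com/user/alice/did.json
```

Every resolution, whatever the web based method, bottoms out in an HTTPS GET like this, which is exactly the dependency chain discussed next.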

Reaching the correct web server is therefore contingent on access control by the administrator of the web server, the security of the web server, the longevity of the organization operating the web server, the Certificate Authority issuing the certificates identifying the web server, the configuration of the Transport Layer Security (TLS) parameters, and the Domain Name System to identify which web server to contact.

Users have two choices:

Operate their own web server, or
Use the web server of some organization that provides them their “decentralized” identifier.

The former is the “let them eat cake” of modern technologies.

Despite many people working for decades to make self-hosting easier and more attractive, self-hosting has been declining. But even if we reverted that trend and enabled and motivated people to self-host with some amazing self-hosting offers: How hard would it be to correlate did:tdw:QmfGEUAcMpzo25kF2Rhn8L5FAXysfGnkzjwdKoNPi615XQ:petermueller.ch to did:tdw:QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n:petermueller.ch ?

How difficult would it be to figure out these might both belong to the same person, whose name might be Peter Müller? Especially considering that the web server at petermueller.ch presents a certificate that lists the owner of the certificate to be a “Peter Müller”, and the whois record for the domain lists his full name, address and phone number?
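Not difficult at all: the correlation is a one-liner, assuming the did:tdw method-specific identifier has the `<SCID>:<domain>` shape shown in the examples above.

```python
def did_tdw_domain(did: str) -> str:
    # did:tdw:<SCID>:<domain> — the domain sits in plain sight,
    # so any two DIDs sharing it are trivially linkable
    return did.split(":")[3]

a = "did:tdw:QmfGEUAcMpzo25kF2Rhn8L5FAXysfGnkzjwdKoNPi615XQ:petermueller.ch"
b = "did:tdw:QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n:petermueller.ch"

print(did_tdw_domain(a) == did_tdw_domain(b))  # True
```

No cryptography needs to be broken; the linkage is in the identifier itself.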

Which brings us to the second choice, above, which is today’s reality for most people in a federated identity world: Trust the platform intermediary.

How much decentralization is there in Apple Mail? How decentralized are today’s Certificate Authorities? How much privacy and control do users of Gmail have? How secure are today’s web services? How well does today’s world fare in terms of data protection from compromise and loss? How good is today’s Web security?

In reality, Web based DID methods give up on Decentralization, Control, Privacy and Security to the same level that today’s federated identity solutions have given up on them.

They use protocols like OpenID Connect for Verifiable Credentials and Verifiable Presentations (OIDC4VC & OIDC4VP) because they ARE OpenID methods. If use cases built on top of Web based DIDs practiced truth in labelling, they would inform their users that they are based on OpenID.

But much of the technology world thrives on buzzwords and hypes, and too often, the technical reality is obfuscated by layers of technical complexity and marketing. So the market rarely penalises false advertising.

did:web(vh), EV edition

Using the Web for “Decentralized” Identifiers and advertising it as revolutionary SSI technology is a bit like selling an “Electric Vehicle” that avoids all the complexities of battery development by using a diesel generator on a towed trailer to power the car. Yes, the propulsion is now electric.

But is the end result fundamentally better than a diesel car?

But what about the added security?

When reading about did:webvh, one could get the impression that a lot of security is being added. In reality, it is mostly added complexity, because everything travels over a single channel, the same one did:web already uses.

It adds security in the same way that a web site would get more secure if you asked users to enter not one password but three, in sequence, in the correct order.

There is a reason no one does that. Three passwords are not fundamentally more secure, because there is no additional channel. Add a real second factor and security actually goes up, which is why Multi-Factor Authentication (MFA) was invented.

Most likely the Web based DID methods can be developed to the point they will provide actual MFA security at a similar level to today’s federated identity protocols. Maybe did:webvh is even close to that point.

But that only makes it just as secure as “Login with Google”, today. And it does nothing to make it meet the SSI criteria of Decentralization, Control and Privacy.

Perhaps it is time to acknowledge that this parrot is not just a heavy sleeper.

Embrace, Extend, Extinguish

So what’s the problem if some people like did:web and its relatives? As long as we are aware of the limitations, and never use it for systems that are supposed to be used in production by end users or SMEs, there is nothing wrong with did:web.

As I’ve written in a previous article, it’s really useful for rapid prototyping, and can be used as a placeholder during experimentation before switching to a real Decentralized Identifier. We’ve done so ourselves when Vereign has been working on Proof of Concept for the Swiss health sector in 2023. But once we started working on the production system in 2024, we switched to an Autonomous Identifier (AID) that meets the definition of Self Sovereign Identity.

The problem starts when people put Web based identifiers into production.

Not only is it an issue of misleading users with false promises of decentralization, control, privacy and security. It runs much deeper than that. Increasing adoption of Web based identifiers under the moniker of Self Sovereign Identity makes it impossible for actual Self Sovereign Identity to differentiate itself from federated identity protocols. It sucks the air out of the room for actual SSI.

At a technology strategy level, adoption of Web based identifiers makes SSI susceptible to something it was originally designed to prevent: Platform capture.
Depiction of did:web(vh) being welcomed by Self Sovereign Identity community

Whether accidentally or by design, the movement for Web based identifiers perfectly executes a strategy coined by Microsoft in the 90s, labelled Embrace, Extend, Extinguish. I’ve gotten to study that particular script extensively when coordinating the technical and communication activities of the Free Software Foundation Europe around the EU Microsoft antitrust case in order to obtain much needed interoperability information for Samba.

The script is not super complicated. First, become a champion of Self Sovereign Identity, embrace it visibly, participate in the conferences, champion it at the political level. Then come up with ideas to extend it, for instance by proposing to speed up adoption by falling back on “proven”” technologies from the Web. Provided enough Kool-Aid, nobody might notice that it violates the principles of SSI and you’ll find many willing participants.

And lastly, once it has become the dominant flavour to however misleadingly claim the label Self Sovereign Identity, extinguish what is left in terms of actual SSI by aggressively using your economic and political might to push a platform play to suck the air out of the market. While Sovrin had its issues, including political, it undoubtedly lived up to all the SSI principles. Recently, the Sovrin Foundation announced that it was shutting down in March 2025 due to its community moving to the Web.

So, what’s left?

Microsoft had originally championed did:ion, a fully Self Sovereign Identifier based on the Sidetree specification. But as of 2023, it unsurprisingly also switched to did:web. Old habits die hard. Other large tech platforms are also pushing in the same direction, as are several of the former governmental monopolists with strong political ties, such as T-Systems.

The most promising design for a decentralized identifier is the Key Event Receipt Infrastructure (KERI), and at conceptual level it solves some very hard problems that no other method even attempts to address. The problem is how long it has been the promising next thing, without achieving sufficient adoption, and without finding its way into the regulatory documents in the European Union eIDAS (for “electronic IDentification, Authentication and trust Services”) working group, which is strongly pushing in the direction of Web based identifiers.

Unsurprisingly, technical experts have raised security and privacy concerns. In fact, it seems the current draft of the EU Architecture and Reference Framework (ARF) may be in violation of the EU privacy provisions it is supposed to provide.

Also, and it’s already been a topic in the DICE2024 retrospective, KERI is currently available in Python only. Which leaves adoption hamstrung. Not everyone in the KERI community agrees with that, but I’m aware of a number of people and initiatives who would love to adopt KERI, but not in Python. And its completeness as a concept puts the effort required for implementation in another language outside what is feasible for any of these parties individually.

So, when looking at the W3C DID Traits draft, the table looks pretty bleak, with two actual SSI methods left on it: did:key and did:peer. Both limited in relation to quite a few use cases.

What we ended up doing…

We anticipated this picture when designing our use case and solution for the Swiss health sector back in January 2024. The Web identifiers were obvious non-starters, as were did:key and did:peer, due to them being overly limited for our purpose.

We also did not like the idea of putting Python into a mission critical production application for large number of users. Especially since we did not want to put Python on the phone, and also did not want remote wallets that do not actually live on the phone.

So we did what XKCD told us not to do. Stay tuned.


IdM Laboratory


The Public Review Period for OpenID for Verifiable Credentials Issuance Has Begun


Hello, this is Fujie.

Following the recent OpenID for Verifiable Presentations, the next one has finally arrived: OpenID for Verifiable Credential Issuance has reached its 2nd Implementer's Draft.

https://openid.net/public-review-period-for-proposed-second-implementers-draft-of-openid-for-verifiable-credential-issuance/

The schedule is as follows:

- Implementer's Draft public review period: Friday, December 20, 2024 to Sunday, February 2, 2025 (45 days)
- Implementer's Draft vote announcement: Monday, January 20, 2025
- Implementer's Draft early voting opens: Monday, January 27, 2025
- Implementer's Draft official voting period: Monday, February 3 to Tuesday, February 10, 2025

It really feels like the final sprint toward Verifiable Credentials being deployed in society, especially with EUDIW ramping up in earnest in 2026.


Saturday, 21. December 2024


IdM Laboratory


Now on Sale at Last: デジタルアイデンティティのすべて


Hello, this is Fujie.

When I got home for the weekend, an advance copy was waiting for me: デジタルアイデンティティのすべて, which goes on sale on December 27. It is just slightly larger than the original edition.

You can pre-order it here: https://amzn.to/3P9KS2e

The latest issue of Software Design had arrived as well, so the year-end holidays will be all identity and passkeys!

パスキーのすべて will also be released at the end of January, so warm up your head in the meantime. https://amzn.to/3ZHQohg
Web security, symbolized

Monty Python's parrot sketch is an all-time classic because it plays on a very human experience: being defenseless when someone just blatantly refuses to acknowledge the obvious. Shared reality is a matter of perception, not objective observation. Aided by various mental biases, including the sunk cost fallacy and the desire to agree with people we perceive as sympathetic or competent, virtually all humans can fall into this trap. Technical experts on Self Sovereign Identity included.

Instead of recognizing that the parrot of Web security is deceased, has gone to meet its maker, is pushing up the daisies, some people keep insisting that it is merely napping, and use trinkets and all kinds of strings and wires to hold it up.

The result is did:tdw, recently rebranded to did:webvh.

Web based DID methods belong to the family of federated identity methods, not Self Sovereign Identity

Using the web for Decentralized Identifiers (DIDs) violates some of the basic principles of Self Sovereign Identity, and effectively restricts the possible properties of the system to that of a classic federated identity protocol, such as OpenID.

Federated identity systems have their uses, and are often “good enough” for large corporations and governments. But they also enable and encourage platform strategies, which have dramatic implications for personal use, as well as for Small and Medium Enterprises (SMEs). The result has been the Surveillance Industry, and a dependency of 95% of our economy on a few large platform companies.

Self Sovereign Identity has been developed as a concept to break that dependency, and give people control over their own privacy, security and data. Instead, thanks to did:web and its descendants, it increasingly looks like an exercise of putting SSI lipstick on the pig of the federated Web.

You may think this is just hyperbole. So let’s go back to the beginning.

About the principles of SSI

The design goals of Decentralized Identifiers are listed in Section 1.2 of the W3C DID specification:

W3C DID: Design goals for Decentralized Identifiers (DID)

So how well do Web based DID methods meet these goals?

All web based methods, including did:web, did:tdw, did:webvh, and any other web based method anyone might ever come up with, depend on a domain name pointing to a web server. The method specific identifier is always transformed into an HTTPS request. The DID to HTTPS transformation is the same for did:webvh as it is for did:web.

Reaching the correct web server is therefore contingent on access control by the administrator of the web server, the security of the web server, the longevity of the organization operating the web server, the Certificate Authority issuing the certificates identifying the web server, the configuration of the Transport Layer Security (TLS) parameters, and the Domain Name System to identify which web server to contact.
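That transformation is purely mechanical. A minimal Python sketch of the rules from the did:web specification (a bare domain resolves under /.well-known/, further colon separated segments become path components, a percent-encoded port is decoded) shows that every resolution ends at a plain HTTPS URL:

```python
from urllib.parse import unquote

def did_web_to_https(did: str) -> str:
    """Map a did:web identifier to the HTTPS URL of its DID document."""
    if not did.startswith("did:web:"):
        raise ValueError("not a did:web identifier")
    parts = did[len("did:web:"):].split(":")
    # Percent-decoding restores an optional port, e.g. example.com%3A8443
    domain = unquote(parts[0])
    if len(parts) == 1:
        # Bare domain: the DID document lives under the well-known path
        return f"https://{domain}/.well-known/did.json"
    # Additional colon-separated segments become path components
    return f"https://{domain}/{'/'.join(parts[1:])}/did.json"

print(did_web_to_https("did:web:example.com"))
# https://example.com/.well-known/did.json
print(did_web_to_https("did:web:example.com:user:alice"))
# https://example.com/user/alice/did.json
```

Everything after this point (DNS, TLS, the web server's access control) is exactly the ordinary Web trust chain.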

Users have two choices:

1. Operate their own web server, or
2. Use the web server of some organization that provides them their “decentralized” identifier.

The former is the “let them eat cake” of modern technologies.

Despite many people working for decades to make self-hosting easier and more attractive, self-hosting has been declining. But even if we reversed that trend and enabled and motivated people to self-host with some amazing self-hosting offers: How hard would it be to correlate did:tdw:QmfGEUAcMpzo25kF2Rhn8L5FAXysfGnkzjwdKoNPi615XQ:petermueller.ch with did:tdw:QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n:petermueller.ch?

How difficult would it be to figure out these might both belong to the same person, whose name might be Peter Müller? Especially considering that the web server at petermueller.ch presents a certificate that lists the owner of the certificate to be a “Peter Müller”, and the whois record for the domain lists his full name, address and phone number?
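No cryptography is needed for that correlation. A hypothetical sketch (the helper and the grouping are illustrative, not taken from any real wallet or resolver) shows the domain sitting in plain sight at the end of each identifier:

```python
from collections import defaultdict

def group_by_domain(dids: list[str]) -> dict[str, list[str]]:
    """Group did:tdw / did:webvh identifiers by the domain they resolve to."""
    groups: dict[str, list[str]] = defaultdict(list)
    for did in dids:
        # did:tdw:{SCID}:{domain} -- the serving domain follows the SCID
        parts = did.split(":")
        domain = parts[3]
        groups[domain].append(did)
    return dict(groups)

observed = [
    "did:tdw:QmfGEUAcMpzo25kF2Rhn8L5FAXysfGnkzjwdKoNPi615XQ:petermueller.ch",
    "did:tdw:QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n:petermueller.ch",
]
print(group_by_domain(observed))
# both identifiers land in the same "petermueller.ch" bucket
```

Any verifier, logger, or network observer can run the same grouping, which is why the self-hosting option provides so little privacy in practice.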

Which brings us to the second choice, above, which is today’s reality for most people in a federated identity world: Trust the platform intermediary.

How much decentralization is there in Apple Mail? How decentralized are today’s Certificate Authorities? How much privacy and control do users of Gmail have? How secure are today’s web services? How well does today’s world fare in terms of data protection from compromise and loss? How good is today’s Web security?

In reality, Web based DID methods give up on Decentralization, Control, Privacy and Security to the same level that today’s federated identity solutions have given up on them.

They use protocols like OpenID Connect for Verifiable Credentials and Verifiable Presentations (OIDC4VC & OIDC4VP) because they ARE OpenID methods. Which is why, if use cases built on top of Web based DIDs practised truth in labelling, they would inform their users that they are based on OpenID.

But much of the technology world thrives on buzzwords and hypes, and too often, the technical reality is obfuscated by layers of technical complexity and marketing. So the market rarely penalises false advertising.

did:web(vh), EV edition

Using the Web for “Decentralized” Identifiers and advertising it as revolutionary SSI technology is a bit like selling an “Electric Vehicle” that avoids all the complexities of battery development by using a diesel generator on a towed trailer to power the car. Yes, the propulsion is now electric.

But is the end result fundamentally better than a diesel car?

But what about the added security?

When reading about did:webvh, one could get the impression that a lot of security is being added. In reality, it is mostly added complexity, because everything travels over a single channel: the same one that did:web already uses.

It adds security in the same way that web sites would get more secure if you asked users to enter not one password, but three passwords, one after the other, in the correct order.

There is a reason no one does that. Three passwords are not fundamentally more secure, because there is no additional channel. Add a real second factor, and security actually goes up. Which is why Multi Factor Authentication (MFA) was invented.

Most likely the Web based DID methods can be developed to the point they will provide actual MFA security at a similar level to today’s federated identity protocols. Maybe did:webvh is even close to that point.

But that only makes it just as secure as “Login with Google”, today. And it does nothing to make it meet the SSI criteria of Decentralization, Control and Privacy.

Perhaps it is time to acknowledge that this parrot is not just a heavy sleeper.

Embrace, Extend, Extinguish

So what’s the problem if some people like did:web and its relatives? As long as we are aware of the limitations, and never use it for systems that are supposed to be used in production by end users or SMEs, there is nothing wrong with did:web.

As I’ve written in a previous article, it’s really useful for rapid prototyping, and can be used as a placeholder during experimentation before switching to a real Decentralized Identifier. We did so ourselves when Vereign was working on a Proof of Concept for the Swiss health sector in 2023. But once we started working on the production system in 2024, we switched to an Autonomous Identifier (AID) that meets the definition of Self Sovereign Identity.

The problem starts when people put Web based identifiers into production.

Not only is it an issue of misleading users with false promises of decentralization, control, privacy and security. It runs much deeper than that. Increasing adoption of Web based identifiers under the moniker of Self Sovereign Identity makes it impossible for actual Self Sovereign Identity to differentiate itself from federated identity protocols. It sucks the air out of the room for actual SSI.

At a technology strategy level, adoption of Web based identifiers makes SSI susceptible to something it was originally designed to prevent: Platform capture.
Depiction of did:web(vh) being welcomed by Self Sovereign Identity community

Whether accidentally or by design, the movement for Web based identifiers perfectly executes a strategy coined by Microsoft in the 90s: Embrace, Extend, Extinguish. I got to study that particular script extensively while coordinating the technical and communication activities of the Free Software Foundation Europe around the EU Microsoft antitrust case, in order to obtain much needed interoperability information for Samba.

The script is not super complicated. First, become a champion of Self Sovereign Identity: embrace it visibly, participate in the conferences, champion it at the political level. Then come up with ideas to extend it, for instance by proposing to speed up adoption by falling back on “proven” technologies from the Web. Given enough Kool-Aid, nobody may notice that it violates the principles of SSI, and you will find many willing participants.

And lastly, once it has become the dominant flavour to claim the label Self Sovereign Identity, however misleadingly, extinguish what is left of actual SSI by aggressively using your economic and political might in a platform play that sucks the air out of the market. While Sovrin had its issues, including political ones, it undoubtedly lived up to all the SSI principles. Recently, the Sovrin Foundation announced that it was shutting down in March 2025 due to its community moving to the Web.

So, what’s left?

Microsoft had originally championed did:ion, a fully Self Sovereign Identifier based on the Sidetree specification. But as of 2023, it unsurprisingly also switched to did:web. Old habits die hard. Other large tech platforms are also pushing in the same direction, as are several of the former governmental monopolists with strong political ties, such as T-Systems.

The most promising design for a decentralized identifier is the Key Event Receipt Infrastructure (KERI), and at a conceptual level it solves some very hard problems that no other method even attempts to address. The problem is how long it has been the promising next thing, without achieving sufficient adoption, and without finding its way into the regulatory documents of the European Union eIDAS (“electronic IDentification, Authentication and trust Services”) working group, which is pushing strongly in the direction of Web based identifiers.

Unsurprisingly, technical experts have raised security and privacy concerns. In fact, it seems the current draft of the EU Architecture and Reference Framework (ARF) may be in violation of the EU privacy provisions it is supposed to provide.

Also, as was already a topic in the DICE2024 retrospective, KERI is currently available in Python only, which leaves adoption hamstrung. Not everyone in the KERI community agrees with that assessment, but I am aware of a number of people and initiatives who would love to adopt KERI, just not in Python. And its completeness as a concept puts the effort required for an implementation in another language beyond what is feasible for any of these parties individually.

So, when looking at the W3C DID Traits draft, the table looks pretty bleak, with two actual SSI methods left on it: did:key and did:peer. Both are limited with respect to quite a few use cases.
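For contrast, a self-certifying method like did:key involves no server at all: the identifier is derived from the public key itself, by prefixing the Ed25519 multicodec bytes and encoding the result as base58btc multibase. A minimal sketch, with a toy base58 encoder for self-containment and an all-zero placeholder instead of a real key:

```python
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58btc(data: bytes) -> str:
    """Minimal base58btc encoder (Bitcoin alphabet)."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # one '1' per leading zero byte
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def did_key_ed25519(public_key: bytes) -> str:
    """Derive a did:key from a raw 32-byte Ed25519 public key."""
    assert len(public_key) == 32
    # 0xed 0x01 is the multicodec varint prefix for Ed25519 public keys;
    # 'z' is the multibase prefix for base58btc
    return "did:key:z" + base58btc(b"\xed\x01" + public_key)

# Placeholder key: a real one would come from an Ed25519 keypair
print(did_key_ed25519(bytes(32)))  # -> did:key:z6Mk...
```

Since the identifier is a pure function of the key, there is no domain, no certificate authority, and nothing to resolve over the Web; the flip side is that a did:key can never rotate its key, which is one of the limitations just mentioned.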

What we ended up doing…

We anticipated this picture when designing our use case and solution for the Swiss health sector back in January 2024. The Web identifiers were obvious non-starters, as were did:key and did:peer, due to them being overly limited for our purpose.

We also did not like the idea of putting Python into a mission critical production application for a large number of users. Especially since we did not want to put Python on the phone, and also did not want remote wallets that do not actually live on the phone.

So we did what XKCD told us not to do. Stay tuned.

Friday, 20. December 2024
@@ -10923,7 +11342,7 @@

Doc Searls Weblog

Losing (or gaining) a Genius

Sixteenth in the News Commons series.


Dave Askins is shutting down the B Square Bulletin.


This is tragic. And not just for Bloomington and Monroe County. (Dave covered the governing bodies of both like a glove.) It’s tragic for journalism. Because Dave is far more than an exemplar of reporting in service to the public. He is a genius-grade source of ideas for institutionalizing local journalism in ways that can be sustained and enlarged, constantly. His ideas are simple, comprehensive, easy to implement, and not found anywhere else—yet.


You can read about them in most of the pieces I’ve written in the News Commons series:

- We Need Deep News (18 August 2023)
- We Need Wide News (30 August 2023)
- We Need Whole News (15 September 2023)
- Stories vs. Facts (12 October 2023)
- Deeper News (20 October 2023)
- DatePress (9 November 2023)
- The Online Local Chronicle (19 March 2024)
- Archives as Commons (21 April 2024)
- The Future, Past, and Present of News (30 June 2024)
- A Better Way to Do News (16 August 2024)

Specifically, Dave’s ideas and inventions include—

- The Big Calendar: a master calendar made entirely by feeds from every other calendar in the county, plus AI scrapings of unstructured data (such as in posters), turned into entries through AI. While the Big Calendar is still in the B Square Bulletin, it can move anywhere. (I think WFHB should take it over. They can stick it in the menu at the top of their web page.)
- DatePress: a radically new way to do community calendars. WordPress or any of the many WordPress tool, plugin, and extension makers could take this project on, but WordPress itself is best positioned to do it best.
- BloomDocs: a public document repository that serves as an essential resource for journalists, residents, elected officials, and government staff. As Dave put it, “Look for it on BloomDocs” (or the equivalent for any region) should be a common answer to the question, “Where can I get a copy of that document?”
- The Online Local Chronicle: one model is a wiki that Dave started here.
- A whole new and comprehensive way to flow news: from Future (calendars) to Present (today’s stories) to Past (archives and chronicles): an approach that respects the need for facts, and not just for stories—facts that may prove useful years, decades, or centuries in the future.

I unpack these in A Better Way to Do News.


As for what’s next, look at how saving local journalism is a Big Thing for many philanthropies. These include—

- Press Forward (which I first learned about from my old friend John Palfrey, who now runs the MacArthur Foundation)
- craig newmark philanthropies
- American Journalism Project
- Bloomberg Philanthropies
- Knight Foundation
- Joyce Foundation
- Report for America
- City Bureau
- Media Impact Funders
- Google News Initiative
- Emerson Collective
- The Pivot Fund
- The Funders Network
- Community news funds of many kinds
- Local Media Association
- Community Foundation of Bloomington and Monroe Counties
- All the other journalism support organizations listed by LION Publishers

An interesting thing about Dave is that he has always been about advancing local journalism, not just “saving” it.


No journalist is better qualified for funding toward advancing local journalism than Dave. I hope one (or more!) of the entities above reaches out, either to Dave (through the B Square while it’s still up) or through me. To any of those entities who might be reading this, don’t think of Dave as an applicant for a grant. Look at him the way a director casts a play or a movie: as the best performer for a leading role.


It would be great if Bloomington’s loss became journalism’s gain.


Bonus links:


We’re sorry to see B Square Bulletin close, by Jeremy Hogan in The Bloomingtonian.


Local news site B Square Bulletin shutting down, by Ethan Sandweiss at Indiana Public Media (WFIU radio and WTIU television) Excerpts:


Earlier this month, Bloom Magazine told Askins it would present him an award at its annual holiday party on Dec. 19 for his contributions to local journalism. He decided then to announce the end of B Square the next day. Onstage that night at The Woolery Mill, Askins and founder of The Bloomingtonian, Jeremy Hogan, received awards from Bloom Magazine editor-in-chief Malcolm Abrams…


Other outlets in Bloomington continue to cover local government, including WFIU/WTIU News (which recently filled a vacancy for a full-time government reporter), The Herald-Times and the Indiana Daily Student.


‘Not the way I wanted to wake up’: Bloomington news website ceases publication, by Boris Ladwig in the Herald-Times. Excerpt:


Local government officials on Friday lamented the B Square’s demise.


Mayor Kerry Thomson said via email the community “owes Dave a debt of gratitude.”


She said Dave has asked the important questions, “which only come from very careful observation and depth of research.”


“In the year I have been leading the city, he has certainly helped me be a better leader, and for many of us he has planted the seeds of how to serve the people of Bloomington better,” Thomson said.


Bloomington City Council member Isak Asare said on Facebook the B Square’s demise left him feeling sad.


Heres Tom with the Weather

No Water is Death

“Extermination & Acts of Genocide”: Human Rights Watch on Israel Deliberately Depriving Gaza of Water


AMY GOODMAN: So, can I ask you, Bill Van Esveld, is this the first time that Human Rights Watch is accusing Israel of genocide in Gaza?


BILL VAN ESVELD: This is the first time that we’ve made a finding of genocidal acts in Gaza. It is not an accusation that we level lightly. We have not done this very often in our history. We accused the Myanmar military of genocidal acts against the Rohingya in 2017, and we found full-blown genocide against the Kurds in Saddam Hussein’s Anfal campaign in Iraq in the ’80s — sorry, in the ’90s, and we found genocide against — also in Rwanda in the ’80s. It is, you know, an extremely difficult crime to prove. It is, you know, mass killing deliberately to destroy people because they’re part of the group, not something we level lightly, but, yes, we found it here.


Ben Werdmüller

Meta Contributes 178K EUR to OpenStreetMap

[OpenStreetMap]

Meta has contributed 178,710 Euros (an oddly specific number!) to OpenStreetMap.

On one level: hooray for people contributing to open source.

On another: Meta has a $1.5 Trillion market cap and uses OpenStreetMap in multiple applications. To be fair, it also provides direct non-monetary contributions, but regardless, when all is said and done, it's a bargain. Arguably, the open source project deserves much more. And it's really sad that a donation at this level from a major beneficiary of the project is so exciting that it merits a blog post.


#Technology


[Link]


Ben Werdmüller

Companies issuing RTO mandates “lose their best talent”: Study

[Scharon Harding at Ars Technica]

From the "gee, you don't say" department:

"Return-to-office (RTO) mandates have caused companies to lose some of their best workers, a study tracking over 3 million workers at 54 "high-tech and financial" firms at the S&P 500 index has found. These companies also have greater challenges finding new talent, the report concluded."

The study finds that RTO policies increased turnover rates by 14% - although, of course, in many cases that was part of the point, as a kind of quiet layoff that didn't involve the same level of bad press or the financial commitments to departing employees. (As part of the study, 25% of executives admitted to this. Which is a lot!)

The study also calls out that RTO rules convey "a culture of distrust that encourages management through monitoring," which is spot on - and nobody wants to feel like they're being surveilled or treated like children.

Don't get me wrong: I love coming into the office from time to time. But RTO policies - at least for most knowledge workers - are employee-hostile.


#Labor


[Link]


Hello, this is Fujie.

The public review period for finalizing the FAPI 2.0 Security Profile and Attacker Model specifications has begun.

https://openid.net/public-review-for-proposed-final-fapi-2-0-specifications/

The process is expected to follow this schedule:

Final Specification public review period: Monday, December 9, 2024 to Friday, February 7, 2025 (60 days)
Final Specification vote announcement: Saturday, January 25, 2025
Final Specification early voting opens: Saturday, February 1, 2025
Final Specification voting period: Saturday, February 8, 2025 to Saturday, February 15, 2025 (7 days)

FAPI is finally entering full swing.

Tuesday, 17. December 2024

Ben Werdmüller

Hello, Social Web 👋🏼

[A New Social]

I'm psyched about this announcement:

"We're A New Social, a new non-profit organization focused on building cross-protocol tools and services for the open social web.

[...] The first project we'll take on to accomplish this mission is Bridgy Fed, a service that enables users of ActivityPub-based platforms like Mastodon, ATProto-based platforms like Bluesky, and websites to interact and engage across ecosystems."

In other words, A New Social is a non-profit that is kicking off by supporting the long-standing Bridgy project but isn't stopping there. The idea is that we'll all be sharing and communicating on one social web, even if there are a variety of underlying protocols powering it all. Bridgy, of course, helps bridge between social networks. But there's a lot more to do, which is why the non-profit is talking about collaborating with orgs like The Social Web Foundation and IFTAS.

The CEO is Anuj Ahooja, who has been doing wonderful work across decentralized social; he joins Ryan Barrett, who has been developing Bridgy for years and years. I can't wait to see what they do together.

Like I said, I'm psyched.


#Fediverse


[Link]



This post looks at implementing an OpenID Connect identity provider in Microsoft Entra External ID. Auth0 is used as the identity provider and an ASP.NET Core application is used to test the authentication. Microsoft Entra External ID federates to Auth0. Client code: https://github.com/damienbod/EntraExternalIdCiam Microsoft Entra External ID supports federation using OpenID Connect and was […]

IdM Laboratory

The 2024 Gartner Magic Quadrant (Access Management) has been published

Hello, this is Fujie.

In this field, Gartner's Hype Cycle and Magic Quadrant are often used in marketing. This time it's the Magic Quadrant: published every year for access management and many other areas, it evaluates where each vendor's service sits in the market (Leader, Challenger, and so on).

The access management edition has just been published, and the vendors positioned as Leaders have issued press releases.

Source: Gartner

The Leaders this time are Microsoft, Okta, and Ping Identity.

Each has issued a press release:

Microsoft

https://www.microsoft.com/en-us/security/blog/2024/12/05/8-years-as-a-leader-in-the-gartner-magic-quadrant-for-access-management/

Okta

https://www.okta.com/jp/resources/gartner-magic-quadrant-access-management/

Ping Identity

https://www.pingidentity.com/en/gartner-magic-quadrant-access-management.html

Whether these services fit your own requirements is a separate question, but the broader trend is worth understanding.





This article looks at setting up an ASP.NET Core application to use Azure Key Vault. When deployed to Azure, it works as described in the Azure documentation, but on development PCs some changes are required for a smooth developer experience. Code: https://github.com/damienbod/UsingAzureKeyVaultInDevelopment I develop using Visual Studio and manage multiple accounts and test environments. […]

Securing Azure Functions using an Azure Virtual Network

Using Key Vault and Managed Identities with Azure Functions

Using Azure Key Vault with ASP.NET Core and Azure App Services


This article shows how to implement a secure web application using Vue.js and ASP.NET Core. The web application implements the backend-for-frontend (BFF) security architecture and deploys both technical stacks as one web application. HTTP-only secure cookies are used to persist the session. OpenIddict is used as the identity provider and the token […]
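As a minimal illustration of the HTTP-only secure session cookies that pattern relies on, here is a Python standard-library sketch (illustrative only, not the article's ASP.NET Core code):

```python
# Build a session cookie that the browser will not expose to JavaScript
# (HttpOnly) and will only send over HTTPS (Secure). Illustrative only.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"    # server-side session key
cookie["session"]["httponly"] = True       # hidden from document.cookie
cookie["session"]["secure"] = True         # HTTPS-only transmission
cookie["session"]["samesite"] = "Strict"   # basic CSRF mitigation
cookie["session"]["path"] = "/"

# The Set-Cookie header value the server would emit:
print(cookie["session"].OutputString())
```

Because the cookie is opaque and HTTP-only, tokens never reach browser JavaScript, which is the core of the BFF idea.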

This article demonstrates how to implement a downstream API protected by certificate authentication using the Microsoft YARP reverse proxy in an ASP.NET Core web application. The application uses Angular for its UI and secures both the UI and the ASP.NET Core backend through a backend-for-frontend security architecture. The downstream API is secured with certificate authentication and […]


Friday, 25. October 2024


This article shows how to implement security headers for an application supporting both an API and a Swagger UI created from an OpenAPI definition in .NET 9. The security headers are implemented using the NetEscapades.AspNetCore.SecurityHeaders NuGet package from Andrew Lock. Code: https://github.com/damienbod/WebApiOpenApi Deploying a web application which supports both an API and a UI has different […]
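The general idea behind such header middleware can be sketched generically (a hypothetical Python sketch; the header names are standard recommendations, but the helper itself is illustrative and unrelated to the NuGet package's API):

```python
# Attach common security headers to an HTTP response without clobbering
# headers the handler set explicitly. Generic illustration only.
SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "no-referrer",
    "Content-Security-Policy": "default-src 'self'",
}

def with_security_headers(response_headers: dict) -> dict:
    """Return a copy of the response headers with security headers added."""
    merged = dict(response_headers)
    for name, value in SECURITY_HEADERS.items():
        merged.setdefault(name, value)  # keep explicit per-route overrides
    return merged

print(with_security_headers({"Content-Type": "application/json"}))
```

The article's point is precisely that an API and a Swagger UI need different header sets, so a real setup would vary the Content-Security-Policy per endpoint rather than use one global dictionary.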

Improving application security in an ASP.NET Core API using HTTP headers – Part 3

This article looks at the different setups when using app-to-app security with Microsoft Entra ID (OAuth client credentials). Microsoft Entra App registrations are used to configure the OAuth clients and resources. For each tenant, an Enterprise application is created for the client App registration when consent is granted. The claims in the access token […]

This article shows how to implement a geolocation search in an ASP.NET Core application using a LeafletJs map. The selected location can be used to find the nearest location with an Elasticsearch geo-distance query. The Elasticsearch container and the ASP.NET Core UI application are set up for development using .NET Aspire. Code: https://github.com/damienbod/WebGeoElasticsearch Setup For […]

Using Elasticsearch with .NET Aspire

Thursday, 19. September 2024

Michael Ruminer

A Great AI RAG Resource

I came across a great AI retrieval-augmented generation (RAG) resource. It is a GitHub repo: Advanced RAG Techniques: Elevating Your Retrieval-Augmented Generation Systems. I'll just copy and paste their introduction here.

“Welcome to one of the most comprehensive and dynamic collections of Retrieval-Augmented Generation (RAG) tutorials available today. This repository serves as a hub for cutting-edge techniques aimed at enhancing the accuracy, efficiency, and contextual richness of RAG systems.”

All I can say is, wow. It really covers a lot of ground. I plan to dig into it and will report back.
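For readers new to the pattern, the core RAG loop is small: retrieve the passages most relevant to a query, then fold them into the prompt sent to a language model. A minimal sketch (toy term-overlap scoring; the function names and corpus are illustrative, not from the repository):

```python
# Minimal RAG sketch: retrieve the most relevant documents, then build
# an augmented prompt for a language model. Purely illustrative.
from collections import Counter

DOCS = [
    "Chunking splits source documents into retrievable passages.",
    "Reranking reorders retrieved passages by relevance.",
    "Query expansion rewrites the user question to improve recall.",
]

def score(query: str, doc: str) -> int:
    """Overlap of query terms with document terms (toy relevance score)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum(min(q[t], d[t]) for t in q)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by term overlap."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("reranking of retrieved passages", DOCS))
```

The repository's techniques (chunking strategies, reranking, query expansion, and so on) are refinements of the `retrieve` step in this loop.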


Monday, 16. September 2024


This post shows how to use Elasticsearch in .NET Aspire. Elasticsearch is set up to use HTTPS with the dotnet developer certificates, and a simple client can be implemented to query the data. Code: https://github.com/damienbod/keycloak-backchannel Setup Two services are set up to run in .NET Aspire. The first service is the official Elasticsearch docker container and deployed […]

This post shows how to implement an OpenID Connect back-channel logout using Keycloak, ASP.NET Core and .NET Aspire. The Keycloak and Redis cache containers are run using .NET Aspire. Two ASP.NET Core UI applications are used to demonstrate the server logout. Code: https://github.com/damienbod/keycloak-backchannel Setup The applications are run and tested using .NET Aspire. […]

Identosphere Blog Catcher - Latest Headlines
