Slack wants you to know this privacy exploit is fine

Published: Sep 25, 2023 (last updated: Sep 25, 2023) · ~900 words · ~4 minutes reading time

Last week, after a call with the engineers on my team, I wanted to send a message to two of the engineers at the same time - little did I know I’d find what I believe to be a nasty privacy exploit in Slack, one that made me ask “Why is this even a feature?”

Like any good, responsible software engineer, instead of taking to social media or forums to post about the exploit, I opted to report it to HackerOne - where Slack accepts reports of potential security exploits - and gave a detailed list of instructions on how to achieve it (Report #2171907).

Shortly after (within 40 minutes) I received a reply which slightly dumbfounded me - the full reply is below, but it began “We have reviewed your submission, and while this behavior is not perfectly ideal, it is intended.”

After asking some people in the infosec community what the right next step was, I was advised that I’m within my rights to write about this, mostly to give fair warning to Slack users that this exploit exists.

Unforeseen Consequences

So what was this exploit I found? In some ways it’s so deliciously simple that I had to double-check it actually happened. I found it by using Slack “as intended” - but the result was not what I expected, the UI is unclear about the consequences, and I believe most users who experience it would not realise what has happened. In the moment, under certain circumstances, it could cause real harm to users.

The list of actions is as follows:

  1. Click on a user in Direct Messages
  2. Click on the user name at the top where the chevron is located
  3. Click "Add people to this conversation" (up to 9 people)
  4. Add one or more users to the conversation

You will now receive a popup with several options - in this case, because I couldn’t find the original DM with the pair of engineers in the UI, the option “Include conversation history?” seemed sensible.

In hindsight, it does have a message at the top of the screen - but in the moment, as many know, these messages can be missed or ignored by users when not labeled as warnings - or exploited by any bad actor with access to the appropriate Slack account.

So a private DM conversation, going all the way back to the beginning of its history, has now been shared with all the people added to it, attachments and all.

The other person in the DM thread was never asked for consent to allow this action.

What makes it even worse is that this whole DM is now a thread, and a thread can be turned into a private channel.

Once turned into a private channel, the original person in the DM can be removed from the entire room - no longer having access to any of the messages. Also as a channel a much larger number of people (like a whole organisation) can be added to it.

OK, what now?

So the functionality is working as intended.

But what if I’m a disgruntled employee who has some DMs with a board member who was sharing internal confidential information? Or a bully who happens to find an unlocked computer of a fellow employee they know is in a secret relationship? Or a hacker who has managed to get into a key account of a Slack workspace that hasn’t enabled 2FA, or via a magic link in their email account because of a weak password?

It’s not like unauthorised access to systems is unheard of.

Slack’s reply and caveats

To be fair to Slack, I will include their response below, where they give their reasons why this is “as intended”.

I’ll leave it up to you, the reader, to decide if it’s enough of a mitigation, or if many companies would even have the processes to begin to deal with this as an immediate threat - I can guess many companies using Slack don’t even have a dedicated CSO or a team managing Slack, and likely use default settings.

Thank you for your report.

We have reviewed your submission, and while this behavior is not perfectly ideal, it is intended. In this “attack”, a user must have the permission to create private Channels, and this can be restricted by Owners/Admins.

In addition, the DM messages are not truly deleted when performing this behavior. An Admin/Owner with the necessary permissions to manage private Channels can always access the content within private Channels if necessary.

For these reasons, we are relatively satisfied with our security at this time, and we will be closing this report as Informative. Regardless, we appreciate your efforts here, and hope you continue to submit to our program.

Thanks, and good luck with your future bug hunting.

I Think I Found a Privacy Exploit in ChatGPT

Published: Apr 14, 2023 (last updated: Apr 18, 2023) · ~1700 words · ~8 minutes reading time

> tl;dr: I discovered that passing empty prompts to ChatGPT still generates responses. Initially, I thought these might be hallucinations, but now I suspect they could also include other users' responses from the API.

Last month, OpenAI unveiled their advanced large language model, GPT-4, attracting attention from developers, enterprises, media, and governments alike.

Before receiving my GPT-4 invite, I experimented with alpaca.cpp, designed to run models on CPUs with limited memory. I began by developing a simple web interface using NodeJS and sockets for parsing the command line. Once I started working with the GPT-4 API, I quickly realized that with the right prompts, it could be a powerful tool. It has already helped me rewrite complex code into simpler methods and reduce complexity by moving code to functions:

Screenshot of ChatGPT suggesting a code improvement for creating tables by proposing a cell creation method

However, I noticed something peculiar - due to a bug in my code, I was sending empty prompts to the ChatGPT endpoint, but I still received seemingly random responses, ranging from standard AI model introductions to information about people, places, and concepts. Inspired by the coinciding #StochasticParrotsDay online conference, I transformed this into a Mastodon Bot (now moved to botsin.space).

After running the bot for a month, I concluded that a significant portion of the responses without prompts might be responses for other users, potentially due to a bug that sends unintended responses when given an unsanitized empty prompt.

These could be stochastic hallucinations, random training data pulled out by Entropy , or a mix of all three possibilities.

If this is the case, then ChatGPT would not be much better than a Markov Chain , and the entire large language model/AI market has been playing us for fools.

However, if I am correct, then the current OpenAI APIs could potentially be made to leak private or sensitive data, simply by not sanitising their inputs…

The bot will continue to run until at least the end of this month, and all the content will be archived at stochasticparrot.lol .

Summary of what it could be?

I have three pet theories about what’s happening here. I’ve submitted this to OpenAI’s disclosure and bug bounty programme.

  • These are impressive hallucinations, possibly sparks of AGI, but sometimes they become nonsensical, or the output is concerning, especially around personal medical questions.
  • ChatGPT randomly accesses its corpus and regurgitates data in some form. It really loves generating lists.
  • There is a bug, potentially a serious one. If the empty prompt issue is more thoroughly investigated, it might confirm that passing no prompt returns cached or previous responses.

It would be interesting if all three theories were true…

Update: Bug Bounty Response

I’ve since had a reply on Bugcrowd - it was first closed as Not Applicable with a response about the model, so I reiterated that it was about the API. A further response now confirms (from their perspective) that this is indeed hallucinations.

Hi tanepiper,

Thank you for your submission to the OpenAI program and your patience on this submission. We appreciate your efforts in keeping our platform secure.

It looks like what you’re experiencing here is what happens when you send a request to the model without any query at all. You can try it out yourself in the API like this:

curl 'https://api.openai.com/v1/chat/completions' \
  -H 'authorization: Bearer <yourtoken>' \
  -H 'content-type: application/json' \
  --data-raw '{"messages":[{"role":"system","content":""}],"temperature":0.7,"max_tokens":256,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"model":"gpt-3.5-turbo","stream":true}' 

What’s happening there is that it’s starting by picking a token completely at random, and then another one, and so on until the previous tokens start to influence what comes afterward and it starts to “make sense”, even if it’s just a completely random hallucination from the model. It’s a really fascinating and weird way these models work. However, there’s no security impact here. As such, I do believe the current state of the submission is accurate.

All the best in your future bug hunting!

Best regards, - wilson_bugcrowd

So for now, case closed…


Setting up the Bot Infrastructure

To get the bot up and running, I wanted it to be free and easy to manage. In the end I opted to use GitHub Actions with scheduled tasks - this allowed me to set up a script that ran hourly, calling the ChatGPT API with an empty prompt and turning the result into a toot. I also found that passing only a space character to the Dall-E API produced images.
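The core of the hourly script can be sketched like this (the function names and structure here are my own illustration, not the bot’s actual code - only the endpoint and the empty `content` string are from the real behaviour described above):

```javascript
// Build the "empty prompt" request body - note the empty content string,
// which is the bug-turned-feature that drives the whole bot.
const buildEmptyPromptRequest = (model = 'gpt-3.5-turbo') => ({
  model,
  messages: [{ role: 'system', content: '' }],
  temperature: 0.7,
  max_tokens: 256,
});

// Call the chat completions endpoint and return the generated text,
// ready to be turned into a toot by the rest of the pipeline.
async function fetchParrotText(apiKey) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      authorization: `Bearer ${apiKey}`,
      'content-type': 'application/json',
    },
    body: JSON.stringify(buildEmptyPromptRequest()),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Running this on a schedule is then just a matter of a cron-triggered GitHub Actions workflow invoking the script with the API key from a repository secret.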

With both scripts, after getting a response from OpenAI, I use it to generate one or more toots - depending on the length, as a set of replies - and for images, I first download them and then upload them to Mastodon as attachments.
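Splitting a long response into a set of replies might look something like this (a minimal sketch of my own - the 500-character limit is Mastodon’s default, but the splitting logic is illustrative, not the bot’s real code):

```javascript
// Split a long response into a thread of toots, each within the
// character limit, breaking only on whitespace so words stay intact.
function splitIntoToots(text, limit = 500) {
  const words = text.split(/\s+/);
  const toots = [];
  let current = '';
  for (const word of words) {
    const candidate = (current + ' ' + word).trim();
    if (candidate.length > limit && current) {
      toots.push(current); // current toot is full - start a new one
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current) toots.push(current);
  return toots;
}
```

Each element of the returned array is then posted as a reply to the previous one, forming the thread.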

Some of the more recent toots are below - ones with a parrot emoji (🦜) are created without a prompt, while if a prompt is used I add a speech bubble (💬) to indicate this.


🦜 My opinion as an AI language model is that South Korea should not ban tattoos. Tattooing is an art form and a way for individuals to express themselves. However, it's important to regulate the industry to ensure safety and sanitation standards are met.

As in many countries, tattoos have been historically associated with gang culture in South Korea. Since times have changed, this perception has shifted with more people viewing tattoos as a form of


🦜 Yes, it's possible and actually quite common to be attracted to fictional characters. This can happen for many reasons:

1. Relatability: People often become attached to characters they find relatable or who share similar experiences, struggles, or personality traits.

2. Idealization: Fiction allows authors and creators to build their ideal version of a character - one that is intelligent, attractive and the epitome of ideals people crave.

3. Wish fulfillment:


🦜 Here are the ten most-used emojis in the world:

1. 😂 Face with Tears of Joy
2. ❤️ Red Heart
3. 😍 Smiling Face with Heart-Eyes
4. 🤣 Rolling on the Floor Laughing
5. 😊 Smiling Face with Smiling Eyes
6. 👍 Thumbs Up
7. 💕 Two Hearts
8. 🙏 Hands folded in prayer or thank you gesture (also considered a high-five)
9. 🔥 Fire, representing something being hot or exciting.
10. 😘 Face Blowing a Kiss

These rankings may vary slightly depending on geographical

Once I had this up and running, I created a small AstroJS website that outputs each entry as a post.

Making Polly Speak

Up to this point, I had just been working with text and images - but I had recently seen ElevenLabs in some tech news, and learned that they have a text-to-speech API. After some initial issues (which used up all of the free credit) I eventually set up another action that took the OpenAI response and passed it to the ElevenLabs API - this provided an MP3 stream of the speech, which I saved locally and then uploaded to Mastodon as an attachment to a toot.
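The text-to-speech step has roughly this shape (the endpoint path and `xi-api-key` header are from my reading of the ElevenLabs docs and should be treated as assumptions; the function names are mine):

```javascript
// Build the ElevenLabs text-to-speech URL for a given voice.
const ttsUrl = (voiceId) =>
  `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`;

// Send the text and return the MP3 bytes, ready to write to disk
// before uploading to Mastodon as an audio attachment.
async function speak(text, voiceId, apiKey) {
  const res = await fetch(ttsUrl(voiceId), {
    method: 'POST',
    headers: { 'xi-api-key': apiKey, 'content-type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  return Buffer.from(await res.arrayBuffer());
}
```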

I also decided to try to see if I could get it to generate some polls. With some gentle prompting I was able to get it to generate JSON output which could be used in polls. Sadly, most of the time it seems to repeat the same questions over and over with just slightly different wording, occasionally coming up with something original.
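Before posting, the model’s JSON output needs validating - Mastodon polls take between 2 and 4 options by default. This is my own sketch of that check (the JSON shape is illustrative, not the bot’s actual schema):

```javascript
// Parse and validate the model's poll JSON before posting it.
// Throws if the shape doesn't match what a Mastodon poll needs.
function parsePoll(raw) {
  const poll = JSON.parse(raw);
  if (typeof poll.question !== 'string') {
    throw new Error('poll is missing a question');
  }
  if (
    !Array.isArray(poll.options) ||
    poll.options.length < 2 ||
    poll.options.length > 4
  ) {
    throw new Error('polls need 2-4 options');
  }
  return poll;
}
```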

I even went as far as trying to generate video content - not through Stable Diffusion, but by generating text themes to use with the Createomate API - allowing me to generate social media “fact” videos. Unfortunately this was a bit buggy, and due to the way Mastodon works it can time out quite a bit.

A fun experiment

Overall, writing this bot was a fun experiment - but I probably learned more about writing better pipelines than I did about AI and LLMs. What did surprise me was how often the responses seemed to be answers to questions that were never asked - where are these responses being generated? Are we seeing the flicker of AGI? Or just the stochastic ramblings of a machine run by some sketchy people.

Announcing Formula - A Zero-Config Reactive Forms Library for Svelte

Published: Feb 13, 2021 (last updated: Feb 13, 2021) · ~400 words · ~2 minutes reading time

Today I’m happy to announce the release of Svelte Formula - a new forms library for Svelte .

The Svelte Formula Logo is some science beakers and a molecule

Formula is a Zero-Config library - what this means is that you do not have to pass any settings to the library itself to handle form validation and submission - it uses the validation properties of HTML5 forms directly, meaning you can create progressive, accessible forms first.

The library is for use with the use directive and can be bound to any element, not just <form> ones, and it automatically handles subscription and unsubscription to any form element with a name attribute.

Here is the example from the demo:

<script>
  import {onDestroy} from 'svelte';
  import {formula} from 'svelte-formula'

  const {form, formValues, validity, touched, formValid} = formula();

  const sub = formValues.subscribe(v => console.log(v));

  onDestroy(() => {
    sub();
  })
</script>

<form use:form>
  <div class='form-field'>
    <label for='username'>Username</label>
    <input type='text' id='username' name='username' required minlength="8" class:error={$touched?.username &&
           $validity?.username?.invalid}/>
    <div hidden={$validity?.username?.valid}>{$validity?.username?.message}</div>
  </div>
  <div class='form-field'>
    <label for='password'>Password</label>
    <input type='password' id='password' name='password' required minlength="8" class:error={$touched?.password &&
           $validity?.password?.invalid}/>
    <div hidden={$validity?.password?.valid}>{$validity?.password?.message}</div>
  </div>

  <button disabled={!$formValid}>Save</button>
</form>

<style>
  .form-field {
    margin-bottom: 10px;
    border-bottom: 1px solid lightgrey;
  }

  .error {
    border: 1px solid red;
  }
</style>

In this example the only validations are required and minlength, applied directly to the HTML element itself - errors and error states are displayed via the validity object, and the touched object lets us apply error styling only once the form element has first been focused.

The release is considered an alpha version - the API may change and there are still tests and documentation to write - but you can try it out right now in your own project with npm install svelte-formula - any bugs, issues or suggestions, please feel free to leave them here

Revisiting Dark patterns with the HTML <dialog> tag

Published: Feb 8, 2021 (last updated: Feb 12, 2021) · ~1200 words · ~6 minutes reading time

Every once in a while I’ll see frontend developers tweet or blog about how great the HTML dialog tag is, and each time I’m reminded of an old Medium post I made a few years ago about its use.

The good news is that in 2021 it’s much harder to use it to crash a user’s browser - in Firefox dialog.showModal is behind a flag, so it has to be specifically enabled by users. In all browsers, the preferred way to use this tag in 2021 is the open attribute, which is a much more sensible way of using it. However, it is still possible to do some of the things mentioned below and kill browser performance (and it’s still possible to steal focus in Chrome).

I thought it would be a good opportunity to revisit it, as I feel it still has issues that make it a potentially dangerous tag to use on the web without thinking about the implementation. One thing that blogpost (and me pointing these issues out) does is bring out people who say it’s nothing to do with the platform…

So you discovered you can do shit in the browser and mess with the user
Except that nothing of this is related to the dialog tag, nor is the fact that it’s “built-in”. If the argument is “the built-in dialog should be restricted so that people can’t abuse it”, all you’re doing is making sure abusers use a div (or different elem) as dialog instead.

Certainly, some people didn’t agree - and it would be a valid suggestion to say this is achievable in other ways. However, several years later I still stand by my original points:

  • By default, library modals do not completely steal the user’s mouse focus - even in a loop of generating modals you can still click elsewhere on the page (although of course they can always be made to!)
  • It’s still very easy to kill browser performance very quickly, because there is no limit to the number of dialogs that can be created
  • The <dialog> tag is built into the platform, so there’s no way to stop its use by blocking any offending library code - bad actors have all the tools to do this available in the DOM with a single HTML file.
  • A single API is also a single attack surface - any script injection attacks don’t have to worry about library-specific implementations - they can always rely on document.getElementsByTagName('dialog') to access <dialog> elements on the page.

Over the last few years, users have also been taught the bad habit of clicking away any popup in operating systems or in the browser, through the use of cookie popups, newsletter popups and other generally badly implemented ideas.

Revisiting The Hacks

Now on StackBlitz, I’m using the same demo as before with some updates. The example does the following:

  • Adds a listener to the close button which, when the user hovers over it, disables the button and moves the modal.
  • Adds a second close button that, when pressed, closes the modal but also triggers an undesirable path (such as installing malware, because we now have a trusted user-triggered Event that allows Web APIs)
  • Two buttons that send you into a death loop of infinitely created modals that can’t be escaped and always steal focus.
  • Hijacking the user’s keyboard, so users cannot use Esc, nor can they Tab away from the dialog or use Space to click any buttons.

Overriding the DOM for fun

One issue is that you pretty much have full control over the DOM of the dialog. This is good in many ways, but with great power…

function createModal() {
  // ...
  const button = document.createElement("button");
  button.innerText = "Close Me";

  function moveModal() {
    button.setAttribute("disabled", true);
    button.innerText = 'LOL';
    lastX = newModal.style.top || 0;
    lastY = newModal.style.left || 0;
    newModal.style.transform = `translate(${(lastX * Math.random() < 0.5
      ? -1
      : 1) +
    Math.random() * 250}px, ${(lastY * Math.random() < 0.5 ? -1 : 1) +
    Math.random() * 250}px)`;
    button.removeAttribute("disabled");
  }

  button.addEventListener('mouseenter', moveModal);
  // ...
}

With our annoying button, we use this to control the transform style of the modal itself and move it beyond the reach of the user. We add a second button that does close the modal using a call to modal.close() but after that we can trigger any JavaScript we want, such as loading new tabs, running bitcoin miners, searching for aliens, etc.

My argument here is that outside of styling some parts of the look and feel, it should not be possible to mess with the dialog after it has been presented.

Blocking the Escape Route

Of course most users might try pressing the Escape key in hopes that whatever is on the screen will disappear, but we have that particular route covered.

const keyEvents = [
  "Backspace",
  "Tab",
  "Enter",
  "Escape",
  "Space",
  "ArrowLeft",
  "ArrowUp",
  "ArrowRight",
  "ArrowDown"
];
document.addEventListener("keyup", event => {
  if (keyEvents.includes(event.code)) {
    event.preventDefault();
    createModal();
    return false;
  }
});

In this case we hijack the Escape key when we detect its code, as well as any other keys the user might try to use to escape - and we use them to just keep triggering the creation of more modals.

My argument here is when a dialog is displayed, don’t allow JavaScript to hijack the keyboard and stop the user from closing the dialog - this is also bad for accessibility.

Killing the Browser

In 2021 it’s less easy to create a stack overflow with the dialog, but it is still possible to slow down the user’s browser and spike CPU just by calling setInterval on it:

function handleKill1() {
  setInterval(createModal, 1000);  // 1 second here is generous - but we could do it every 1ms
}

Here we can see that we get a lot of spikes in CPU usage and memory garbage collection is constantly triggered:

A Chrome performance graph showing CPU and memory usage

My argument here is to limit the number of dialogs that can be shown on screen at once - modern browsers already do this for alert - allowing the user to set "Don't show popups anymore for this website" when it’s abused by developers.

Component Solution

One way around this in your own code is to use the <dialog> tag only within your own components - I’ve created another example with a web component that embeds the dialog into a ShadowRoot and provides a global state, so you cannot open more than one at a time and third-party scripts cannot access the internal <dialog>.
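The heart of that component approach is a module-level lock so only one dialog can exist at a time. This is my own minimal illustration of that state, detached from the DOM (in the real component the ShadowRoot additionally hides the inner <dialog> from third-party scripts):

```javascript
// Module-level state shared by every instance of the component -
// this is what enforces "only one dialog at a time".
const dialogState = { open: false };

// Attempt to show a dialog; returns false if one is already open.
// `show` would do the actual DOM work inside the component's ShadowRoot.
function openGuardedDialog(show) {
  if (dialogState.open) return false; // a dialog is already on screen
  dialogState.open = true;
  show();
  return true;
}

// Release the lock when the dialog closes, so a new one may open.
function closeGuardedDialog(hide) {
  dialogState.open = false;
  hide();
}
```

Because the lock lives in module scope rather than on the element, even a loop that constructs new components cannot stack dialogs on screen.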

Hopefully you enjoy the demo and this post. I’m not against the use of dialogs themselves - but I think providing some way to limit the number of modals that can be shown, and to limit what can and cannot be changed within them, would provide a deeper layer of security across the web platform.

WebSerial.app - Browser to USB Serial Communication With Svelte

Published: Feb 5, 2021 (last updated: Feb 12, 2021) · ~200 words · ~1 minute reading time

After my previous experiments with the Web Serial API, I started experimenting with Svelte .

Within a couple of days I have created https://webserial.app/ - the Web Serial Controller app.

The interface might seem familiar - it’s based on XP.css - a Windows XP CSS theme. It was inspired by some of the serial hardware software used in the ’00s.

The interface of Web Serial Controller

The application is fully open source and features:

  • Fully connected state - use the screens or keyboard shortcuts to connect and disconnect from devices with shared state
  • Filter devices by vendor ID with a fully searchable list of all hardware vendors
  • A draggable XP-like interface
  • Options storage in localStorage
  • Send text messages to any connected device
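The vendor-ID filtering above maps directly onto the Web Serial API’s `requestPort` options. This sketch shows the shape of that call (the helper names and default baud rate are my own; the `usbVendorId` filter key is part of the Web Serial spec):

```javascript
// Build the options object for navigator.serial.requestPort so the
// browser's device picker only shows ports from the given vendors.
function buildPortRequest(vendorIds) {
  return { filters: vendorIds.map((id) => ({ usbVendorId: id })) };
}

// Prompt the user to pick a matching port and open it for communication.
// Must be called from a user gesture (e.g. a button click handler).
async function connect(vendorIds, baudRate = 9600) {
  const port = await navigator.serial.requestPort(buildPortRequest(vendorIds));
  await port.open({ baudRate });
  return port;
}
```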