
More cruft deletion, Docker support, GitHub Pages migration

master-legacy
John Mulhausen 5 years ago
committed by Hank Stoever
commit 1cd2f40c2c
38 changed files (lines changed per file):

    4  .gitignore
    3  Dockerfile
   69  Gemfile
  240  Gemfile.lock
  197  README.md
   27  _config.yml
    2  core/advanced_usage.md
   36  core/aglio_templates/core.jade
  357  core/aglio_templates/mixins.jade
   53  core/aglio_templates/public.jade
  223  core/aglio_templates/scripts.js
 2364  core/api-specs.md
    1  core/atlas_network.md
    4  core/attic/README.md
  124  core/attic/advanced_usage.md
  BIN  core/attic/figures/gaia-authentication.png
  BIN  core/attic/figures/gaia-connect.png
  BIN  core/attic/figures/gaia-getfile.png
  BIN  core/attic/figures/gaia-listdir.png
  BIN  core/attic/figures/gaia-putfile.png
  461  core/attic/gaia.md
   71  core/attic/openbazaar.md
   33  core/attic/resolver.md
  604  core/attic/tutorial_creation.md
    1  core/basic_usage.md
  818  core/blockstack-did-spec.md
    1  core/blockstack_naming_service.md
    1  core/cli.md
  148  core/faq_evaluators.md
    1  core/gaia.md
    1  core/glossary.md
    3  core/interactive_regtest_macros.md
    1  core/namespace_creation.md
    1  core/openbazaar.md
    1  core/resolver.md
    1  core/search.md
   67  core/setup_core_portal.md
    1  core/subdomain.md

4  .gitignore

@@ -6,3 +6,7 @@ node_modules
 _site
 .sass-cache
 .jekyll-metadata
+Gemfile.lock
+**/.DS_Store
+**/desktop.ini
+**/.svn

3  Dockerfile

@@ -0,0 +1,3 @@
+FROM starefossen/github-pages:198
+ADD . /usr/src/app
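For context, the new Dockerfile builds on `starefossen/github-pages`, a container image that bundles Ruby, Jekyll, and the `github-pages` gem; tag `198` matches the version pinned in the repo's Gemfile. A sketch of how such an image is typically extended — the `EXPOSE` line here is an assumption inferred from the README's `-p 4000:4000` port mapping, not part of this commit:

```dockerfile
# Base image bundles Ruby, Jekyll, and the github-pages gem at
# version 198, matching the `gem "github-pages", "198"` pin.
FROM starefossen/github-pages:198

# Copy the docs source into the directory the base image builds from.
ADD . /usr/src/app

# Assumption for illustration only: Jekyll's default dev-server port.
EXPOSE 4000
```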

69  Gemfile

@@ -1,40 +1,43 @@
 source "https://rubygems.org"
-# Hello! This is where you manage which Jekyll version is used to run.
-# When you want to use a different version, change it below, save the
-# file and run `bundle install`. Run Jekyll with `bundle exec`, like so:
-#
-#     bundle exec jekyll serve
-#
-# This will help ensure the proper Jekyll version is running.
-# Happy Jekylling!
-gem "jekyll", "3.8.6"
-
-# This is the default theme for new Jekyll sites. You may change this to anything you like.
-# gem "minima", "~> 2.0"
-
-# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
-# uncomment the line below. To upgrade, run `bundle update github-pages`.
-# gem "github-pages", group: :jekyll_plugins
-
-# If you have any plugins, put them here!
-group :jekyll_plugins do
-  gem "jekyll-feed", "~> 0.6"
-  gem 'jekyll-paginate', '~> 1.1'
-  gem 'jekyll-seo-tag'
-  gem 'jekyll-gist'
-  gem 'jekyll-avatar'
-  gem 'jekyll-titles-from-headings'
-  gem 'jekyll-sitemap'
-  gem 'jekyll-toc'
-  gem 'jekyll-redirect-from'
-  gem 'jekyll-google-tag-manager'
-end
-
-group :development do
-  gem 'guard'
-end
-
-# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
-gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw, :jruby]
+# Update me once in a while: https://github.com/github/pages-gem/releases
+# Please ensure, before upgrading, that this version exists as a tag in starefossen/github-pages here:
+# https://hub.docker.com/r/starefossen/github-pages/tags/
+#
+# Fresh install?
+#
+# Windows:
+# Install Ruby 2.3.3 x64 and download the Development Kit for 64-bit:
+# https://rubyinstaller.org/downloads/
+#
+# Run this to install devkit after extracting:
+# ruby <path_to_devkit>/dk.rb init
+# ruby <path_to_devkit>/dk.rb install
+#
+# then:
+# gem install bundler
+# bundle install
+#
+# Mac/Linux:
+# Install Ruby 2.3.x and then:
+# gem install bundler
+# bundle install
+#
+# ---------------------
+# Upgrading? Probably best to reset your environment:
+#
+# Remove all gems:
+# gem uninstall -aIx
+#
+# (If Windows, do the dk.rb bits above, then go to the next step below)
+# Install anew:
+# gem install bundler
+# bundle install
+#
+# This only affects interactive builds (local build, Netlify) and not the
+# live site deploy, which uses the Dockerfiles found in the publish-tools
+# branch.
+gem "github-pages", "198"
+gem 'wdm' if Gem.win_platform?

240  Gemfile.lock

@@ -1,33 +1,93 @@
 GEM
   remote: https://rubygems.org/
   specs:
+    activesupport (4.2.11.1)
+      i18n (~> 0.7)
+      minitest (~> 5.1)
+      thread_safe (~> 0.3, >= 0.3.4)
+      tzinfo (~> 1.1)
     addressable (2.7.0)
       public_suffix (>= 2.0.2, < 5.0)
-    coderay (1.1.2)
+    coffee-script (2.4.1)
+      coffee-script-source
+      execjs
+    coffee-script-source (1.11.1)
     colorator (1.1.0)
+    commonmarker (0.17.13)
+      ruby-enum (~> 0.5)
     concurrent-ruby (1.1.6)
+    dnsruby (1.61.3)
+      addressable (~> 2.5)
     em-websocket (0.5.1)
       eventmachine (>= 0.12.9)
       http_parser.rb (~> 0.6.0)
+    ethon (0.12.0)
+      ffi (>= 1.3.0)
     eventmachine (1.2.7)
-    faraday (0.17.1)
+    execjs (2.7.0)
+    faraday (1.0.1)
       multipart-post (>= 1.2, < 3)
-    ffi (1.12.2)
-    formatador (0.2.5)
+    ffi (1.13.1)
     forwardable-extended (2.6.0)
-    guard (2.16.1)
-      formatador (>= 0.2.4)
-      listen (>= 2.7, < 4.0)
-      lumberjack (>= 1.0.12, < 2.0)
-      nenv (~> 0.1)
-      notiffany (~> 0.0)
-      pry (>= 0.9.12)
-      shellany (~> 0.0)
-      thor (>= 0.18.1)
+    gemoji (3.0.1)
+    github-pages (198)
+      activesupport (= 4.2.11.1)
+      github-pages-health-check (= 1.16.1)
+      jekyll (= 3.8.5)
+      jekyll-avatar (= 0.6.0)
+      jekyll-coffeescript (= 1.1.1)
+      jekyll-commonmark-ghpages (= 0.1.5)
+      jekyll-default-layout (= 0.1.4)
+      jekyll-feed (= 0.11.0)
+      jekyll-gist (= 1.5.0)
+      jekyll-github-metadata (= 2.12.1)
+      jekyll-mentions (= 1.4.1)
+      jekyll-optional-front-matter (= 0.3.0)
+      jekyll-paginate (= 1.1.0)
+      jekyll-readme-index (= 0.2.0)
+      jekyll-redirect-from (= 0.14.0)
+      jekyll-relative-links (= 0.6.0)
+      jekyll-remote-theme (= 0.3.1)
+      jekyll-sass-converter (= 1.5.2)
+      jekyll-seo-tag (= 2.5.0)
+      jekyll-sitemap (= 1.2.0)
+      jekyll-swiss (= 0.4.0)
+      jekyll-theme-architect (= 0.1.1)
+      jekyll-theme-cayman (= 0.1.1)
+      jekyll-theme-dinky (= 0.1.1)
+      jekyll-theme-hacker (= 0.1.1)
+      jekyll-theme-leap-day (= 0.1.1)
+      jekyll-theme-merlot (= 0.1.1)
+      jekyll-theme-midnight (= 0.1.1)
+      jekyll-theme-minimal (= 0.1.1)
+      jekyll-theme-modernist (= 0.1.1)
+      jekyll-theme-primer (= 0.5.3)
+      jekyll-theme-slate (= 0.1.1)
+      jekyll-theme-tactile (= 0.1.1)
+      jekyll-theme-time-machine (= 0.1.1)
+      jekyll-titles-from-headings (= 0.5.1)
+      jemoji (= 0.10.2)
+      kramdown (= 1.17.0)
+      liquid (= 4.0.0)
+      listen (= 3.1.5)
+      mercenary (~> 0.3)
+      minima (= 2.5.0)
+      nokogiri (>= 1.8.5, < 2.0)
+      rouge (= 2.2.1)
+      terminal-table (~> 1.4)
+    github-pages-health-check (1.16.1)
+      addressable (~> 2.3)
+      dnsruby (~> 1.60)
+      octokit (~> 4.0)
+      public_suffix (~> 3.0)
+      typhoeus (~> 1.3)
+    html-pipeline (2.13.0)
+      activesupport (>= 2)
+      nokogiri (>= 1.4)
     http_parser.rb (0.6.0)
     i18n (0.9.5)
       concurrent-ruby (~> 1.0)
-    jekyll (3.8.6)
+    jekyll (3.8.5)
       addressable (~> 2.4)
       colorator (~> 1.0)
       em-websocket (~> 0.5)
@@ -40,57 +100,127 @@ GEM
       pathutil (~> 0.9)
       rouge (>= 1.7, < 4)
       safe_yaml (~> 1.0)
-    jekyll-avatar (0.7.0)
-      jekyll (>= 3.0, < 5.0)
+    jekyll-avatar (0.6.0)
+      jekyll (~> 3.0)
+    jekyll-coffeescript (1.1.1)
+      coffee-script (~> 2.2)
+      coffee-script-source (~> 1.11.1)
+    jekyll-commonmark (1.3.1)
+      commonmarker (~> 0.14)
+      jekyll (>= 3.7, < 5.0)
+    jekyll-commonmark-ghpages (0.1.5)
+      commonmarker (~> 0.17.6)
+      jekyll-commonmark (~> 1)
+      rouge (~> 2)
+    jekyll-default-layout (0.1.4)
+      jekyll (~> 3.0)
     jekyll-feed (0.11.0)
       jekyll (~> 3.3)
     jekyll-gist (1.5.0)
       octokit (~> 4.2)
-    jekyll-google-tag-manager (1.0.3)
-      jekyll (>= 3.3, < 5.0)
+    jekyll-github-metadata (2.12.1)
+      jekyll (~> 3.4)
+      octokit (~> 4.0, != 4.4.0)
+    jekyll-mentions (1.4.1)
+      html-pipeline (~> 2.3)
+      jekyll (~> 3.0)
+    jekyll-optional-front-matter (0.3.0)
+      jekyll (~> 3.0)
     jekyll-paginate (1.1.0)
-    jekyll-redirect-from (0.15.0)
-      jekyll (>= 3.3, < 5.0)
+    jekyll-readme-index (0.2.0)
+      jekyll (~> 3.0)
+    jekyll-redirect-from (0.14.0)
+      jekyll (~> 3.3)
+    jekyll-relative-links (0.6.0)
+      jekyll (~> 3.3)
+    jekyll-remote-theme (0.3.1)
+      jekyll (~> 3.5)
+      rubyzip (>= 1.2.1, < 3.0)
     jekyll-sass-converter (1.5.2)
       sass (~> 3.4)
-    jekyll-seo-tag (2.6.1)
-      jekyll (>= 3.3, < 5.0)
+    jekyll-seo-tag (2.5.0)
+      jekyll (~> 3.3)
     jekyll-sitemap (1.2.0)
       jekyll (~> 3.3)
-    jekyll-titles-from-headings (0.5.3)
-      jekyll (>= 3.3, < 5.0)
-    jekyll-toc (0.12.2)
-      nokogiri (~> 1.9)
+    jekyll-swiss (0.4.0)
+    jekyll-theme-architect (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-cayman (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-dinky (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-hacker (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-leap-day (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-merlot (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-midnight (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-minimal (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-modernist (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-primer (0.5.3)
+      jekyll (~> 3.5)
+      jekyll-github-metadata (~> 2.9)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-slate (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-tactile (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-theme-time-machine (0.1.1)
+      jekyll (~> 3.5)
+      jekyll-seo-tag (~> 2.0)
+    jekyll-titles-from-headings (0.5.1)
+      jekyll (~> 3.3)
     jekyll-watch (2.2.1)
       listen (~> 3.0)
+    jemoji (0.10.2)
+      gemoji (~> 3.0)
+      html-pipeline (~> 2.2)
+      jekyll (~> 3.0)
     kramdown (1.17.0)
-    liquid (4.0.3)
-    listen (3.2.1)
-      rb-fsevent (~> 0.10, >= 0.10.3)
-      rb-inotify (~> 0.9, >= 0.9.10)
-    lumberjack (1.0.13)
+    liquid (4.0.0)
+    listen (3.1.5)
+      rb-fsevent (~> 0.9, >= 0.9.4)
+      rb-inotify (~> 0.9, >= 0.9.7)
+      ruby_dep (~> 1.2)
     mercenary (0.3.6)
-    method_source (0.9.2)
     mini_portile2 (2.4.0)
+    minima (2.5.0)
+      jekyll (~> 3.5)
+      jekyll-feed (~> 0.9)
+      jekyll-seo-tag (~> 2.1)
+    minitest (5.14.1)
     multipart-post (2.1.1)
-    nenv (0.3.0)
     nokogiri (1.10.9)
       mini_portile2 (~> 2.4.0)
-    notiffany (0.1.3)
-      nenv (~> 0.1)
-      shellany (~> 0.0)
-    octokit (4.14.0)
+    octokit (4.18.0)
+      faraday (>= 0.9)
       sawyer (~> 0.8.0, >= 0.5.3)
     pathutil (0.16.2)
       forwardable-extended (~> 2.6)
-    pry (0.12.2)
-      coderay (~> 1.1.0)
-      method_source (~> 0.9.0)
-    public_suffix (4.0.4)
-    rb-fsevent (0.10.3)
+    public_suffix (3.1.1)
+    rb-fsevent (0.10.4)
     rb-inotify (0.10.1)
       ffi (~> 1.0)
-    rouge (3.17.0)
+    rouge (2.2.1)
+    ruby-enum (0.8.0)
+      i18n
+    ruby_dep (1.5.0)
+    rubyzip (2.3.0)
     safe_yaml (1.0.5)
     sass (3.7.4)
       sass-listen (~> 4.0.0)
@@ -100,26 +230,20 @@ GEM
     sawyer (0.8.2)
       addressable (>= 2.3.5)
       faraday (> 0.8, < 2.0)
-    shellany (0.0.1)
-    thor (0.20.3)
+    terminal-table (1.8.0)
+      unicode-display_width (~> 1.1, >= 1.1.1)
+    thread_safe (0.3.6)
+    typhoeus (1.4.0)
+      ethon (>= 0.9.0)
+    tzinfo (1.2.7)
+      thread_safe (~> 0.1)
+    unicode-display_width (1.7.0)

 PLATFORMS
   ruby

 DEPENDENCIES
-  guard
-  jekyll (= 3.8.6)
-  jekyll-avatar
-  jekyll-feed (~> 0.6)
-  jekyll-gist
-  jekyll-google-tag-manager
-  jekyll-paginate (~> 1.1)
-  jekyll-redirect-from
-  jekyll-seo-tag
-  jekyll-sitemap
-  jekyll-titles-from-headings
-  jekyll-toc
-  tzinfo-data
+  github-pages (= 198)

 BUNDLED WITH
   2.1.4

197  README.md

@@ -1,45 +1,46 @@
-# README: Overview Documentation Repository
+# Welcome to the Blockstack documentation repo!
 This README explains the user cases, source file organization, and procedures for building the Blockstack documentation. You can find the documentation at https://docs.blockstack.com
-## Use Cases
-Blockstack is a ecosystem build around a platform. There are several types of users to support with the documentation. Types are exist when they can operate within a vertical of the ecosystem. These are the users that can appear within this ecosystem and that the docs must support.
-<table>
-  <tr>
-    <th>Users</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <th>STX holders</th>
-    <td>Users who have purchased STX and who use our wallet to move STX holdings. These users want to know about Blockstack as a company and STX as an investment token. They have a Blockstack identity, they use the Blockstack wallet, and may also use the Blockstack explorer.</td>
-  </tr>
-  <tr>
-    <th>DApp users</th>
-    <td>Users who make use of applications built on the Blockstack platform. These users have a Blockstack identity and typically use the Blockstack Browser at some point.</td>
-  </tr>
-  <tr>
-    <th>Dapp developers</th>
-    <td>Users who develop applications on the Blockstack platform.</td>
-  </tr>
-  <tr>
-    <th>Hub Providers</th>
-    <td>Users who sell or maintain a Gai services are hub providers. These users may be more devops user types as opposed to developers.</td>
-  </tr>
-  <tr>
-    <th>Core service programmers</th>
-    <td>These are users that run Stacks node or who write Clarity contracts. These are also users who build wallets or explorers into the Blockstack platform.</td>
-  </tr>
-</table>
-Finally, a key user set but seldom mentioned for any company docs is the company employees. These users are expected to make use of the documentation when onboarding or to support other users.
-## Documentation backend
-Our documentation is written in Markdown (`.md`), built using [Jekyll](https://jekyllrb.com/), and deployed to a Netlify server. Serving the content from Netlify allows us to use functionality (plugins/javascript) not supported with standard GitHub pages.
-Blockstack versions it source files in a public GitHub repo (duh :smile). You can submit changes by cloning, forking, and submitting a pull request. You can also make use of the **Edit this page on GitHub** link from any https://docs.blockstack.org page.
+You can also make use of the **Edit this page on GitHub** link from any https://docs.blockstack.org page.
+## Authoring Environment Setup
+When setting up your machine for the first time, run:
+```bash
+#install Ruby Version Manager (rvm.io/rvm/install)
+\curl -sSL https://get.rvm.io | bash -s stable
+# restart terminal or run the following source command:
+source ~/.rvm/scripts/rvm
+# install Ruby 2.7 and the needed dependencies
+rvm install 2.7
+rvm use 2.1 --default
+gem install bundler
+# make sure you're in the root of your clone of this repo and run
+bundle install
+```
+Then when authoring, run:
+```bash
+jekyll serve
+```
+You can preview your changes by visiting `http://localhost:4000`
+## Docker Instructions for Authoring
+Don't want to install a bunch of stuff and don't mind the overhead of using Docker? Do this instead:
+```bash
+docker build -t blockdocs .
+docker run -p 4000:4000 -ti -v "$(pwd)":/usr/src/app blockdocs
+```
+## Documentation backend
+Our documentation is written in Markdown (`.md`), built using [Jekyll](https://jekyllrb.com/), and deployed to a Netlify server. Serving the content from Netlify allows us to use functionality (plugins/javascript) not supported with standard GitHub pages.
 Some content is single sourced. Modifying single source content requires an understanding of that concept, how it works with Liquid tags, and the organization of this repo's source files.
@@ -56,52 +57,52 @@ Directories that contain information used to build content.
     <th>Technical Repo(s)</th>
   </tr>
   <tr>
-    <td>_android</td>
-    <td>SDK tutorial. Part of the <a href="https://github.com/blockstack/docs.blockstack/blob/master/_data/navigation_learn.yml">developer menu</a>.</td>
+    <td>android</td>
+    <td>SDK tutorial.</td>
     <td><a href="https://github.com/blockstack/blockstack-android">https://github.com/blockstack/blockstack-android</a></td>
   </tr>
   <tr>
-    <td>_browser</td>
-    <td>Information for end-users about our identity, Storage, and using the browser. There are also three of the original tutorials in there. User docs controlled by in the the <a href="https://github.com/blockstack/docs.blockstack/blob/master/_data/navigation_usenew.yml">user menu</a>. The three tutorials that appear in the <a href="https://github.com/blockstack/docs.blockstack/blob/master/_data/navigation_learn.yml">developer menu</a> There is an <a href="https://github.com/blockstack/docs.blockstack/issues/501" target="_blank">outstanding issue</a> to move these.</td>
+    <td>browser</td>
+    <td>Information for end-users about our identity, Storage, and using the browser.</td>
     <td><a href="https://github.com/blockstack/blockstack-browser">https://github.com/blockstack/blockstack-browser</a></td>
   </tr>
   <tr>
-    <td>_common</td>
+    <td>common</td>
     <td>Contains several shell files that redirect to our reference documentation sites such as Javascript, IOS, and so forth. The reference docs are linked from the developer, core, and Gaia menus.</td>
     <td>Each of these references are generated by their respective repos, core.blockstack.org from <code>blockstack-core</code>, Javascript docs from the <code>blockstack.js</code> and so forth.</td>
   </tr>
   <tr>
-    <td>_core</td>
-    <td>Information for wallet, blockchain, or Clarity developers -- including Atlas, BNS, and so forth. <b>This content STILL needs to be synced with the master docs subdirectory in <a href="https://github.com/blockstack/blockstack-core/tree/master/docs" target="_blank">blockstack-core</a>.</b></td>
-    <td> <a href="https://github.com/blockstack/blockstack-core/tree/master/docs" target="_blank">blockstack-core</a></td>
+    <td>core</td>
+    <td>Information for wallet, blockchain, or Clarity developers -- including Atlas, BNS, and so forth. <b>This content STILL needs to be synced with the master docs subdirectory in <a href="https://github.com/blockstack/blockstack-core/tree/master/docs">blockstack-core</a>.</b></td>
+    <td> <a href="https://github.com/blockstack/blockstack-core/tree/master/docs">blockstack-core</a></td>
   </tr>
   <tr>
-    <td>_develop</td>
-    <td>Information for application developers covers using the Javascript library, our mobile SDKs, and the concepts hat support them. Navigation controlled by <a href="https://github.com/blockstack/docs.blockstack/blob/master/_data/navigation_learn.yml">developer menu</a> </td>
+    <td>develop</td>
+    <td>Information for application developers covers using the Javascript library, our mobile SDKs, and the concepts hat support them.</td>
     <td>This information was never associated with a single repo. Much of it does rely heavily on <a href="https://github.com/blockstack/blockstack.js">https://github.com/blockstack/blockstack.js</a>.</td>
   </tr>
   <tr>
-    <td>_faqs</td>
-    <td>Contains files for single-sourcing all the FAQs. The Blockstack docs has a single page that <a href="https://docs.blockstack.org/faqs/allfaqs" target="_blank">lists all the faqs</a>; then several pages in different sections re-use this information. See the FAQs section below for detail about how these files figure into FAQS.</td>
+    <td>faqs</td>
+    <td>Contains files for single-sourcing all the FAQs. The Blockstack docs has a single page that <a href="https://docs.blockstack.org/faqs/allfaqs">lists all the faqs</a>; then several pages in different sections re-use this information. See the FAQs section below for detail about how these files figure into FAQS.</td>
     <td>Not related to repo.</td>
   </tr>
   <tr>
-    <td>_includes</td>
+    <td>-includes</td>
     <td>Information reused (markdown or html) in many places, common html used in pages and notes. </td>
     <td>These files don't correspond to a repository.</td>
   </tr>
   <tr>
-    <td>_ios</td>
-    <td>SDK tutorial. Part of the <a href="https://github.com/blockstack/docs.blockstack/blob/master/_data/navigation_learn.yml">developer menu</a>.</td>
+    <td>ios</td>
+    <td>SDK tutorial.</td>
     <td><a href="https://github.com/blockstack/blockstack-ios">https://github.com/blockstack/blockstack-ios</a></td>
   </tr>
   <tr>
-    <td>_org</td>
-    <td>Information for Stacks holders and people curious about what Blockstack does. Appear in the the <a href="https://github.com/blockstack/docs.blockstack/blob/master/_data/navigation_org.yml">organization menu</a></td>
+    <td>org</td>
+    <td>Information for Stacks holders and people curious about what Blockstack does</td>
     <td>Not associated with any repository.</td>
   <tr>
-    <td>_storage</td>
-    <td>Information for developers using storage in their apps or creating Gaia servers. Appear in the the <a href="https://github.com/blockstack/docs.blockstack/blob/master/_data/navigation_storage.yml">storage menu</a></td>
+    <td>storage</td>
+    <td>Information for developers using storage in their apps or creating Gaia servers.</td>
     <td><a href="https://github.com/blockstack/blockstack-gaia">https://github.com/blockstack/blockstack-gaia</a></td>
   </tr>
 </table>
@@ -114,19 +115,15 @@ These are the other directories in the site structure:
     <th>Purpose</th>
   </tr>
   <tr>
-    <th>_data</th>
+    <th>-data</th>
     <td>JSON source files for the FAQS, CLI, and clarity reference. Menu configurations YAML files. The CSV file with the glossary. </td>
   </tr>
   <tr>
-    <th>_layouts</th>
+    <th>-layouts</th>
     <td>Layouts for various pages. The community layout is significantly different from the other layouts.</td>
   </tr>
-  <tr>
-    <th>_plugins</th>
-    <td>Code files for plugins.</td>
-  </tr>
   <tr>
-    <th>_sass</th>
+    <th>-sass</th>
     <td>Style folder including customizations.</td>
   </tr>
   <tr>
@@ -135,73 +132,13 @@ These are the other directories in the site structure:
   </tr>
 </table>
-## Building the documentation source for display
-If you are making significant changes to the documentation, you should build and view the entire set locally before submitting a PR.
-## Run locally
-To run locally:
-1. Install Jekyll into your workstation environment
-2. Build and serve locally.
-   ```
-   bundle exec jekyll serve --config _config.yml,staticman.yml --livereload
-   ```
-Use this format to turn on production features:
-```
-JEKYLL_ENV=production bundle exec jekyll serve --config _config.yml
-```
-## Test a Deploy with Surge
-You can also do a test deploy using a tool like [Surge](https://surge.sh/).
-```
-cd _site
-surge
-```
-Make sure you delete the deployed Surge domain when you are done. Using the `teardown` command.
-```
-surge – single command web publishing. (v0.21.3)
-Usage:
-  surge <project> <domain>
-Options:
-  -a, --add         adds user to list of collaborators (email address)
-  -r, --remove      removes user from list of collaborators (email address)
-  -V, --version     show the version number
-  -h, --help        show this help message
-Additional commands:
-  surge whoami      show who you are logged in as
-  surge logout      expire local token
-  surge login       only performs authentication step
-  surge list        list all domains you have access to
-  surge teardown    tear down a published project
-  surge plan        set account plan
-Guides:
-  Getting started   surge.sh/help/getting-started-with-surge
-  Custom domains    surge.sh/help/adding-a-custom-domain
-  Additional help   surge.sh/help
-When in doubt, run surge from within your project directory.
-```
 ## Deployment of the site
 The documentation is deployed to Netlify using the following command:
 ```
 JEKYLL_ENV=production bundle exec jekyll build --config _config.yml
 ```
 ## Generated documentation
@@ -214,7 +151,7 @@ The `_data/cliRef.json` file is generated from the `blockstack-cli` subcommand `
 2. Generate the json for the cli in the `docs.blockstack` repo.
    ```
    $ blockstack-cli docs | python -m json.tool > _data/cliRef.json
    ```
 3. Make sure the generated docs are clean by building the documentation.
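In step 2 above, `python -m json.tool` simply parses the CLI's output and re-serializes it with indentation, failing loudly if the JSON is malformed. A minimal sketch of that validation step in Python (the sample payload is invented for illustration, not real `blockstack-cli` output):

```python
import json

def pretty_print_cli_json(raw: str) -> str:
    """Validate and pretty-print a CLI's JSON output, as
    `python -m json.tool` does; raises ValueError on bad input,
    which is exactly why the pipeline step is a useful sanity check."""
    return json.dumps(json.loads(raw), indent=4)

# Invented payload standing in for `blockstack-cli docs` output:
raw = '{"lookup": {"command": "lookup", "args": ["name"]}}'
print(pretty_print_cli_json(raw))
```

Writing the result to `_data/cliRef.json` then gives Jekyll's Liquid templates a stable, consistently formatted file to consume.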
@@ -247,7 +184,7 @@ As of 8/12/19 Clarity is in the [develop](https://github.com/blockstack/blocksta
    ```
    $ docker run --name docsbuild -it blockstack-test blockstack-core docgen | jsonpp > ~/repos/docs.blockstack/_data/clarityRef.json
    ```
    This generates the JSON source files which are consumed through Liquid templates into markdown.
 7. Rebuild the documentation site with Jekyll.
@@ -297,7 +234,7 @@ You can view [the source code](https://github.com/blockstack/blockstack-core/blo
 The FAQ system servers single-sourced content that support the FAQs that appear in blockstack.org, and stackstoken.com site. We have FAQs that fall into these categories:
 * general
 * appusers
 * dappdevs
 * coredevs
@@ -311,10 +248,10 @@ FAQs are usually written internally by a team that are unfamiliar with markdown
 1. Draft new or revised FAQs in a Google or Paper document.
 2. Review the drafts and approve them.
 3. Convert the FAQ document to HTML.
 4. Strip out the unnecessary codes such as `id` or `class` designations.
    This leaves behind plain html.
 5. Add the new FAQs to the `_data/theFAQS.json` file.
    Currently this is manually done through cut and paste.
 6. Copy the JSON for `appminers` categories to the `_data/appFAQ.json` file.
 7. Run the Jekyll build and verify the content builds correctly by viewing this `LOCAL_HOST/faqs/allfaqs`
 8. Push your changes to the site and redeploy.
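Steps 5 and 6 of this workflow are manual cut-and-paste. Assuming `_data/theFAQS.json` holds a flat list of objects with a `category` field (an assumption — the file's real schema is not shown in this README), the `appminers` copy step could be scripted, for example:

```python
import json

def extract_category(entries, category):
    """Return only the FAQ entries tagged with the given category."""
    return [e for e in entries if e.get("category") == category]

# Invented sample records standing in for _data/theFAQS.json content:
the_faqs = [
    {"category": "general", "question": "What is Blockstack?"},
    {"category": "appminers", "question": "How do I register for App Mining?"},
]

# Step 6: copy the `appminers` entries into _data/appFAQ.json
app_faq = extract_category(the_faqs, "appminers")
print(json.dumps(app_faq, indent=2))
```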
@@ -343,9 +280,3 @@ FAQs are usually written internally by a team that are unfamiliar with markdown
     <td>Source file for all the FAQs.</td>
   </tr>
 </table>
-# Technology Reference
-* [UIKit](https://getuikit.com/docs/grid)
-* [Liquid templates](https://shopify.github.io/liquid/)

27  _config.yml

@@ -129,40 +129,13 @@ include:
   - _redirects
   - _data
 exclude:
-  - package-lock.json
-  - package.json
-  - Guardfile
   - '*/glossary.md'
   - '*/README.md'
   - README.md
-  - THEME_README.md
   - Gemfile
   - Gemfile.lock
-  - 'node_modules'
   - collections.json
   - get-content.sh
-  - _core/aglio_templates
-  - _core/attic
-  - _core/api-specs.md
-  - _core/setup_core_portal.md
-  - _core/advanced_usage.md
-  - _core/atlas_network.md
-  - _core/basic_usage.md
-  - _core/blockstack_naming_service.md
-  - _core/cli.md
-  - _core/blockstack-did-spec.md
-  - _core/faq_evaluators.md
-  - _core/gai.md
-  - _core/glossary.md
-  - _core/interactive_regtest_macros.md
-  - _core/namespace_creation.md
-  - _core/openbazaar.md
-  - _core/resolver.md
-  - _core/search.md
-  - _core/subdomain.md
-  - _data/*.yml
-  - _data/*.csv
-  - _data/cliRef.json
 sass:
   style: compressed

2  core/advanced_usage.md

@@ -1,2 +0,0 @@
-The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

36  core/aglio_templates/core.jade

@@ -1,36 +0,0 @@ (file deleted)
doctype
include mixins.jade
html
head
meta(charset="utf-8")
title= self.api.name || 'API Documentation'
link(rel="stylesheet", href="https://maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css")
style!= self.css
body.preload
#nav-background
div.container-fluid.triple
.row
block nav
+Nav(false)
.content
#right-panel-background
block content
+ContentTriple(false)
.middle
p.text-muted(style="text-align: center;")
script: include scripts.js
if self.livePreview
script(src="/socket.io/socket.io.js")
script.
var socket = io();
socket.on('refresh', refresh);
socket.on('reconnect', function () {
socket.emit('request-refresh');
});

357  core/aglio_templates/mixins.jade

@@ -1,357 +0,0 @@ (file deleted)
mixin TryMe(action)
//- Give a "try-me" link for the public api endpoint
- var myUri = action.uriTemplate
- action.parameters.forEach( function (x) { myUri = myUri.replace( "{" + x.name + "}", x.example) } )
.title
strong
h4
div
div
span.method(class="badge get",style="float:left")
a.method(href=myUri, style="color:rgb(51, 122, 183);font-size:12pt")
= "Try It!"
| &nbsp;
p
div
| &nbsp;
mixin Badge(method)
//- Draw a badge for a given HTTP method
case method
when 'GET'
span.badge.get: i.fa.fa-arrow-down
when 'HEAD'
span.badge.head: i.fa.fa-info-circle
when 'OPTIONS'
span.badge.options: i.fa.fa-dot-circle-o
when 'POST'
span.badge.post: i.fa.fa-plus
when 'PUT'
span.badge.put: i.fa.fa-pencil
when 'PATCH'
span.badge.patch: i.fa.fa-pencil
when 'DELETE'
span.badge.delete: i.fa.fa-times
default
span.badge: i.fa.fa-dot-circle-o
mixin Nav(onlyPublic)
//- Draw a navigation bar, which includes links to individual
//- resources and actions.
nav
if self.api.navItems && self.api.navItems.length
.resource-group
.heading
.chevron
i.open.fa.fa-angle-down
a(href='#top') Overview
.collapse-content
ul: each item in self.api.navItems
li
a(href=item[1])!= item[0]
- if (onlyPublic){
- myGroups = self.api.resourceGroups.filter( filter_public_resourcegroups )
- }else{
- myGroups = self.api.resourceGroups.filter( filter_core_resourcegroups )
- }
each resourceGroup in myGroups || []
.resource-group
.heading
.chevron
i.open.fa.fa-angle-down
a(href=resourceGroup.elementLink)!= resourceGroup.name || 'Resource Group'
.collapse-content
ul
each item in resourceGroup.navItems || []
li
a(href=item[1])!= item[0]
- if (onlyPublic){
- myResources = resourceGroup.resources.filter( filter_public_resources )
- }else{
- myResources = resourceGroup.resources.filter( filter_core_resources )
- }
each resource in myResources || []
li
- if (onlyPublic){
- myActions = resource.actions.filter( filter_public_actions )
- }else{
- myActions = resource.actions.filter( filter_core_actions )
- }
if !self.condenseNav || (myActions.length != 1)
a(href=resource.elementLink)!= resource.name || 'Resource'
ul: each action in myActions || []
li: a(href=resource.elementLink)
+Badge(action.method)
!= action.name || action.method + ' ' + (action.attributes && action.attributes.uriTemplate || resource.uriTemplate)
else
- var action = myActions[0]
a(href=resource.elementLink)
+Badge(action.method)
!= action.name || resource.name || action.method + ' ' + (action.attributes && action.attributes.uriTemplate || resource.uriTemplate)
//- Link to the API hostname, e.g. api.yourcompany.com
each meta in self.api.metadata || {}
if meta.name == 'HOST'
p(style="text-align: center; word-wrap: break-word;")
a(href=meta.value)= meta.value
mixin Parameters(params)
//- Draw a definition list of parameter names, types, defaults,
//- examples and descriptions.
.title
strong URI Parameters
.collapse-button.show
span.close Hide
span.open Show
.collapse-content
dl.inner: each param in params || []
dt= self.urldec(param.name)
dd
code= param.type || 'string'
| &nbsp;
if param.required
span.required (required)
else
span (optional)
| &nbsp;
if param.default
span.text-info.default
strong Default:&nbsp;
span= param.default
| &nbsp;
if param.example
span.text-muted.example
strong Example:&nbsp;
span= param.example
!= self.markdown(param.description)
if param.values.length
p.choices
strong Choices:&nbsp;
each value in param.values
code= self.urldec(value.value)
= ' '
mixin RequestResponse(title, request, collapse)
.title
strong
= title
if request.name
| &nbsp;&nbsp;
code= request.name
if collapse && request.hasContent
.collapse-button
span.close Hide
span.open Show
+RequestResponseBody(request, collapse)
mixin RequestResponseBody(request, collapse, showBlank)
if request.hasContent || showBlank
div(class=collapse ? 'collapse-content' : ''): .inner
if request.description
.description!= self.markdown(request.description)
if Object.keys(request.headers).length
h5 Headers
pre: code
each item, index in request.headers
!= self.highlight(item.name + ': ' + item.value, 'http')
if index < request.headers.length - 1
br
div(style="height: 1px;")
if request.body
h5 Body
pre: code
!= self.highlight(request.body, null, ['json', 'yaml', 'xml', 'javascript'])
div(style="height: 1px;")
if request.schema
h5 Schema
pre: code
!= self.highlight(request.schema, null, ['json', 'yaml', 'xml'])
div(style="height: 1px;")
if !request.hasContent
.description.text-muted This response has no content.
div(style="height: 1px;")
mixin Examples(resourceGroup, resource, action)
each example in action.examples
each request in example.requests
+RequestResponse('Request', request, true)
each response in example.responses
+RequestResponse('Response', response, true)
mixin Content()
//- Page header and API description
header
h1#top!= self.api.name || 'API Documentation'
if self.api.descriptionHtml
!= self.api.descriptionHtml
//- Loop through and display information about all the resource
//- groups, resources, and actions.
each resourceGroup in self.api.resourceGroups || []
section.resource-group(id=resourceGroup.elementId)
h2.group-heading
!= resourceGroup.name || 'Resource Group'
= " "
a.permalink(href=resourceGroup.elementLink) &para;
if resourceGroup.descriptionHtml
!= resourceGroup.descriptionHtml
each resource in resourceGroup.resources || []
.resource(id=resource.elementId)
h3.resource-heading
!= resource.name || ((resource.actions[0] != null) && resource.actions[0].name) || 'Resource'
= " "
a.permalink(href=resource.elementLink) &nbsp;&para;
if resource.description
!= self.markdown(resource.description)
each action in resource.actions || []
.action(class=action.methodLower, id=action.elementId)
h4.action-heading
.name!= action.name
a.method(class=action.methodLower, href=action.elementLink)
= action.method
code.uri= self.urldec(action.uriTemplate)
if action.description
!= self.markdown(action.description)
h4 Example URI
.definition
span.method(class=action.methodLower)= action.method
| &nbsp;
span.uri
span.hostname= self.api.host
!= action.colorizedUriTemplate
//- A list of sub-sections for parameters, requests
//- and responses.
if action.parameters.length
+Parameters(action.parameters)
if action.examples
+Examples(resourceGroup, resource, action)
- function filter_public_actions(x){
- return (x.description.includes('+ Public Endpoint') || x.description.includes('+ Public Only Endpoint'))
- }
- function filter_public_resources(x){
- return (x.actions.filter( filter_public_actions ).length > 0)
- }
- function filter_public_resourcegroups(x){
- return (x.resources.filter( filter_public_resources ).length > 0)
- }
- function filter_core_actions(x){
- return !(x.description.includes('+ Public Only Endpoint'))
- }
- function filter_core_resources(x){
- return (x.actions.filter( filter_core_actions ).length > 0)
- }
- function filter_core_resourcegroups(x){
- return (x.resources.filter( filter_core_resources ).length > 0)
- }
mixin ContentTriple(onlyPublic, descriptionHtml)
.right
h5 API Endpoint
a(href=self.api.host)= self.api.host
.middle
if descriptionHtml
!= descriptionHtml
//- Loop through and display information about all the resource
//- groups, resources, and actions.
- if (onlyPublic){
- myGroups = self.api.resourceGroups.filter( filter_public_resourcegroups )
- }else{
- myGroups = self.api.resourceGroups.filter( filter_core_resourcegroups )
- }
each resourceGroup in myGroups || []
.middle
section.resource-group(id=resourceGroup.elementId)
h2.group-heading
!= resourceGroup.name || 'Resource Group'
= " "
a.permalink(href=resourceGroup.elementLink) &para;
if resourceGroup.descriptionHtml
!= resourceGroup.descriptionHtml
- if (onlyPublic){
- myResources = resourceGroup.resources.filter( filter_public_resources )
- }else{
- myResources = resourceGroup.resources.filter( filter_core_resources )
- }
each resource in myResources || []
if resource.public != null
.middle
.resource(id=resource.elementId)
a.permalink(href=resource.elementLink)
h3.resource-heading
!= resource.name || ((resource.actions[0] != null) && resource.actions[0].name) || 'Resource'
= " "
&para;
if resource.description
!= self.markdown(resource.description)
- if (onlyPublic){
- myActions = resource.actions.filter( filter_public_actions )
- }else{
- myActions = resource.actions.filter( filter_core_actions )
- }
each action in myActions || []
if action.examples
.right
.definition
span.method(class=action.methodLower)= action.method
| &nbsp;
span.uri
span.hostname= self.api.host
!= action.colorizedUriTemplate
.tabs
if action.hasRequest
.example-names
span Requests
- var requestCount = 0
each example in action.examples
each request in example.requests
- requestCount++
span.tab-button= request.name || 'example ' + requestCount
each example in action.examples
each request in example.requests
.tab
+RequestResponseBody(request, false, true)
.tabs
.example-names
span Responses
each response in example.responses
span.tab-button= response.name
each response in example.responses
.tab
+RequestResponseBody(response, false, true)
else
each example in action.examples
.tabs
.example-names
span Responses
each response in example.responses
span.tab-button= response.name
each response in example.responses
.tab
+RequestResponseBody(response, false, true)
.middle
.action(class=action.methodLower, id=action.elementId)
h4.action-heading
.name!= action.name
a.method(class=action.methodLower, href=action.elementLink)
= action.method
code.uri= self.urldec(action.uriTemplate)
if action.description
!= self.markdown(action.description)
//- A list of sub-sections for parameters, requests
//- and responses.
if action.parameters.length
+Parameters(action.parameters)
if onlyPublic
+TryMe(action)
hr.split

53
core/aglio_templates/public.jade

@ -1,53 +0,0 @@
doctype
include mixins.jade
html
head
meta(charset="utf-8")
title= 'Stacks Node'
link(rel="stylesheet", href="https://maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css")
style!= self.css
body.preload
#nav-background
div.container-fluid.triple
.row
block nav
+Nav(true)
.content
#right-panel-background
.middle
header
h1#top!= 'Getting Started'
p!= 'Welcome to this deployment of Stacks Node v{{server_info.server_version}}. You can read the documentation and make RESTful calls to this node.'
p
table
tr
td!= 'Consensus hash'
td!= '{{server_info.consensus}}'
tr
td!= 'Last block seen'
td!= '{{server_info.last_block_seen}}'
tr
td!= 'Last block processed'
td!= '{{server_info.last_block_processed}}'
p!= 'Stacks Node is open-source software released under a GPLv3 license. The code for this API is <a href="https://github.com/blockstack/blockstack-core/tree/master/api">available on GitHub</a> and you can deploy your own nodes by following <a href="https://github.com/blockstack/blockstack-core/tree/master/api">these instructions</a>.'
block content
+ContentTriple(true)
.middle
p.text-muted(style="text-align: center;")
script: include scripts.js
if self.livePreview
script(src="/socket.io/socket.io.js")
script.
var socket = io();
socket.on('refresh', refresh);
socket.on('reconnect', function () {
socket.emit('request-refresh');
});

223
core/aglio_templates/scripts.js

@ -1,223 +0,0 @@
/* eslint-env browser */
/* eslint quotes: [2, "single"] */
'use strict';
/*
Determine if a string ends with another string.
*/
function endsWith(str, suffix) {
return str.indexOf(suffix, str.length - suffix.length) !== -1;
}
/*
Get a list of direct child elements by class name.
*/
function childrenByClass(element, name) {
var filtered = [];
for (var i = 0; i < element.children.length; i++) {
var child = element.children[i];
var classNames = child.className.split(' ');
if (classNames.indexOf(name) !== -1) {
filtered.push(child);
}
}
return filtered;
}
/*
Get an array [width, height] of the window.
*/
function getWindowDimensions() {
var w = window,
d = document,
e = d.documentElement,
g = d.body,
x = w.innerWidth || e.clientWidth || g.clientWidth,
y = w.innerHeight || e.clientHeight || g.clientHeight;
return [x, y];
}
/*
Collapse or show a request/response example.
*/
function toggleCollapseButton(event) {
var button = event.target.parentNode;
var content = button.parentNode.nextSibling;
var inner = content.children[0];
if (button.className.indexOf('collapse-button') === -1) {
// Clicked without hitting the right element?
return;
}
if (content.style.maxHeight && content.style.maxHeight !== '0px') {
// Currently showing, so let's hide it
button.className = 'collapse-button';
content.style.maxHeight = '0px';
} else {
// Currently hidden, so let's show it
button.className = 'collapse-button show';
content.style.maxHeight = inner.offsetHeight + 12 + 'px';
}
}
function toggleTabButton(event) {
var i, index;
var button = event.target;
// Get index of the current button.
var buttons = childrenByClass(button.parentNode, 'tab-button');
for (i = 0; i < buttons.length; i++) {
if (buttons[i] === button) {
index = i;
button.className = 'tab-button active';
} else {
buttons[i].className = 'tab-button';
}
}
// Hide other tabs and show this one.
var tabs = childrenByClass(button.parentNode.parentNode, 'tab');
for (i = 0; i < tabs.length; i++) {
if (i === index) {
tabs[i].style.display = 'block';
} else {
tabs[i].style.display = 'none';
}
}
}
/*
Collapse or show a navigation menu. It will not be hidden unless it
is currently selected or `force` has been passed.
*/
function toggleCollapseNav(event, force) {
var heading = event.target.parentNode;
var content = heading.nextSibling;
var inner = content.children[0];
if (heading.className.indexOf('heading') === -1) {
// Clicked without hitting the right element?
return;
}
if (content.style.maxHeight && content.style.maxHeight !== '0px') {
// Currently showing, so let's hide it, but only if this nav item
// is already selected. This prevents newly selected items from
// collapsing in an annoying fashion.
if (force || window.location.hash && endsWith(event.target.href, window.location.hash)) {
content.style.maxHeight = '0px';
}
} else {
// Currently hidden, so let's show it
content.style.maxHeight = inner.offsetHeight + 12 + 'px';
}
}
/*
Refresh the page after a live update from the server. This only
works in live preview mode (using the `--server` parameter).
*/
function refresh(body) {
document.querySelector('body').className = 'preload';
document.body.innerHTML = body;
// Re-initialize the page
init();
autoCollapse();
document.querySelector('body').className = '';
}
/*
Determine which navigation items should be auto-collapsed to show as many
as possible on the screen, based on the current window height. This also
collapses them.
*/
function autoCollapse() {
var windowHeight = getWindowDimensions()[1];
var itemsHeight = 64; /* Account for some padding */
var itemsArray = Array.prototype.slice.call(
document.querySelectorAll('nav .resource-group .heading'));
// Get the total height of the navigation items
itemsArray.forEach(function (item) {
itemsHeight += item.parentNode.offsetHeight;
});
// Should we auto-collapse any nav items? Try to find the smallest item
// that can be collapsed to show all items on the screen. If not possible,
// then collapse the largest item and do it again. First, sort the items
// by height from smallest to largest.
var sortedItems = itemsArray.sort(function (a, b) {
return a.parentNode.offsetHeight - b.parentNode.offsetHeight;
});
while (sortedItems.length && itemsHeight > windowHeight) {
for (var i = 0; i < sortedItems.length; i++) {
// Will collapsing this item help?
var itemHeight = sortedItems[i].nextSibling.offsetHeight;
if ((itemsHeight - itemHeight <= windowHeight) || i === sortedItems.length - 1) {
// It will, so let's collapse it, remove its content height from
// our total and then remove it from our list of candidates
// that can be collapsed.
itemsHeight -= itemHeight;
toggleCollapseNav({target: sortedItems[i].children[0]}, true);
sortedItems.splice(i, 1);
break;
}
}
}
}
/*
Initialize the interactive functionality of the page.
*/
function init() {
var i, j;
// Make collapse buttons clickable
var buttons = document.querySelectorAll('.collapse-button');
for (i = 0; i < buttons.length; i++) {
buttons[i].onclick = toggleCollapseButton;
// Show by default? Then toggle now.
if (buttons[i].className.indexOf('show') !== -1) {
toggleCollapseButton({target: buttons[i].children[0]});
}
}
var responseCodes = document.querySelectorAll('.example-names');
for (i = 0; i < responseCodes.length; i++) {
var tabButtons = childrenByClass(responseCodes[i], 'tab-button');
for (j = 0; j < tabButtons.length; j++) {
tabButtons[j].onclick = toggleTabButton;
// Show by default?
if (j === 0) {
toggleTabButton({target: tabButtons[j]});
}
}
}
// Make nav items clickable to collapse/expand their content.
var navItems = document.querySelectorAll('nav .resource-group .heading');
for (i = 0; i < navItems.length; i++) {
navItems[i].onclick = toggleCollapseNav;
// Show all by default
toggleCollapseNav({target: navItems[i].children[0]});
}
}
// Initial call to set up buttons
init();
window.onload = function () {
autoCollapse();
// Remove the `preload` class to enable animations
document.querySelector('body').className = '';
};

2364
core/api-specs.md

File diff suppressed because it is too large

1
core/atlas_network.md

@ -1 +0,0 @@
The documentation has moved to [docs.blockstack.org](https://docs.blockstack.org/); please update your bookmarks.

4
core/attic/README.md

@ -1,4 +0,0 @@
# Legacy, Deprecated, or No longer Useful Documentation
Documents here are out-of-date but preserved for posterity. Do not rely on
them.

124
core/attic/advanced_usage.md

@ -1,124 +0,0 @@
# Advanced Usage
This section details some of the advanced features in the CLI.
## A Word of Warning
Advanced features are meant to be used by experienced Blockstack users and developers. They receive less UI/UX testing than basic features, and their interfaces are expected to change to accommodate bugfixes and security fixes. Moreover, improper use of some advanced methods can cost you money, corrupt your profile, or compromise your wallet. Once it receives sufficient testing, an advanced feature may become a basic-mode feature in a subsequent release.
**Do not use advanced mode unless you know what you are doing!**
## Activating Advanced Mode
To activate advanced mode, use the command `blockstack set_advanced_mode on`.
To deactivate it later (recommended), use the command `blockstack set_advanced_mode off`.
## Changing or Using Existing Keys
If you already have a payment key you want to use, or an owner key you want to migrate over, you can generate a wallet directly with `import_wallet`. We recommend using this command interactively, so you know which keys correspond to which usages.
## Accounts
With the accounts methods, you can directly manage your social proofs, link existing services to your profile, and store small bits of information.
The account management methods are:
* `get_account`: Look up an account in a name's profile. There can be more than one match.
* `list_accounts`: List all accounts in a name's profile.
* `put_account`: Add or update an account in a name's profile.
* `delete_account`: Remove an account from a name's profile. This may need to be done more than once, if there are duplicates of the account.
## Advanced Blockstack ID Queries
Beyond `lookup` and `whois`, there are a few other more advanced queries you can run on Blockstack IDs. These include:
### Listing Blockstack IDs
* `get_all_names`: Get the list of every single Blockstack ID in existence.
* `get_names_owned_by_address`: Get the list of names owned by a particular ownership address.
### Querying the Blockchain
* `get_name_blockchain_record`: Get the raw database record for a Blockstack ID. It will contain a *compressed* history of all name operations that have affected it. This is meant primarily for debugging purposes; to get an easier-to-parse listing of the information this command returns, use `get_name_blockchain_history`.
* `get_name_blockchain_history`: Get the set of all prior states a Blockstack ID has been in, keyed by the block heights at which the state-change was processed.
* `get_records_at`: Get the list of name operation records processed at a particular block height.
* `list_update_history`: Get the list of all zonefile hashes that a Blockstack ID has ever had.
### Zonefiles
* `get_name_zonefile`: Get only a Blockstack ID's zonefile.
* `list_zonefile_history`: Get the list of all zonefiles a Blockstack ID has ever had. **NOTE:** There is no guarantee that the server will hold copies of old zonefiles. This command is meant mainly for determining which historic zonefiles a server has processed.
* `set_zonefile_hash`: This is the counterpart to `update`, but instead of setting the zonefile directly and uploading it to storage, you can use this command to directly set the data hash for a Blockstack ID. **NOTE:** You should ensure that the associated zonefile data has been replicated off-chain to a place where other users can get at it.
### Lightweight Queries
The lightweight lookup protocol for Blockstack is called *Simplified Name Verification* (SNV). This command returns a prior blockchain-level record given a more recent known-good consensus hash, serial number, or transaction ID of a transaction that contains a consensus hash. The CLI does not need to trust the Blockstack server to use these commands.
* `lookup_snv`: Use the Merkle skip-list in the name database to look up a historic name operation on a Blockstack ID.
## Consensus Queries
You can query consensus hash information from the server with the following commands:
* `consensus`: Get the consensus hash at a particular block height
## Namespace Queries
In addition to querying Blockstack IDs, the CLI has advanced commands for querying namespaces. These include:
* `get_namespace_blockchain_record`: Get the raw database record for a Blockstack namespace. It will contain a *compressed* history of all namespace operations that have affected it.
* `get_names_in_namespace`: Get the list of every Blockstack ID in a particular namespace.
* `get_namespace_cost`: Get the cost required to preorder a namespace. Does *not* include the cost to reveal and ready it, nor does it include the transaction fees.
## Namespace Creation
**WARNING:** We do not recommend that you try to do this by yourself. Creating a namespace is **EXTREMELY EXPENSIVE**. If you are interested in creating your own namespace, please contact the Blockstack developers on the [Blockstack Slack](http://chat.blockstack.org).
These methods allow you to create a namespace. There are three steps: preordering, revealing, and readying. Preordering a namespace is like preordering a name--you announce the hash of the namespace ID and the address that will control it. Revealing a namespace not only reveals the namespace ID, but also sets the pricing and lifetime rules for names in the namespace. After revealing the namespace, the namespace controller can pre-populate the namespace by importing Blockstack IDs. Once the namespace has been pre-populated, the controller sends a final transaction that readies the namespace for general use.
* `namespace_preorder`: Preorder a namespace.
* `namespace_reveal`: Reveal a namespace, and set its pricing and lifetime parameters. **NOTE:** This must be done within 144 blocks of sending the namespace preorder transaction.
* `name_import`: Import a name into a revealed (but not readied) namespace. You can set its owner address and zonefile hash directly.
* `namespace_ready`: Open a namespace for general registrations.
## Data Storage
Blockstack allows users to store arbitrary data to any set of storage providers for which the CLI has a driver. The data will be signed by the user's data key, so when other users read the data later on, they can verify that it is authentic (i.e. the storage provider is not trusted). Moreover, Blockstack is designed such that users don't have to know or care about which storage providers were used--as far as users can see, storage providers are just shared hard drives.
There are two types of data supported by Blockstack: *mutable* data, and *immutable* data. Mutable data is linked by the profile, and can be written as fast and as frequently as the storage provider allows. Mutable data is addressed by URL.
**WARNING:** While mutable data guarantees end-to-end authenticity, there is a chance that a malicious storage provider can serve new readers stale versions of the data. That is, users who have read the latest data already will not get tricked into reading stale data, but users who have *not yet* read the latest data *can* be tricked (i.e. the CLI keeps a version number for mutable data to do so). This must be taken into account if you intend to use this API.
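The version check mentioned above can be sketched as follows. This is a simplified model of the monotonic-read guard (data shapes and names are assumptions for illustration, not the CLI's actual implementation):

```javascript
// Simplified monotonic-read guard: a reader remembers the highest version
// it has seen per data URL and refuses anything older, so a malicious
// storage provider cannot roll a *returning* reader back to stale data.
// (A first-time reader has no baseline, which is exactly the attack window
// described in the warning above.)
function makeReader() {
  var lastSeen = {};   // url -> highest version observed so far
  return function read(url, fetched) {
    var prev = lastSeen[url] || 0;
    if (fetched.version < prev) {
      return null;     // stale copy: refuse it
    }
    lastSeen[url] = fetched.version;
    return fetched.data;
  };
}

var read = makeReader();
console.log(read('dropbox://profile.json', { version: 2, data: 'new' })); // 'new'
console.log(read('dropbox://profile.json', { version: 1, data: 'old' })); // null
```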
Immutable data, however, is content-addressed, and its cryptographic hash is stored to the user's zonefile. Writing immutable data will entail updating the zonefile and sending an `update` transaction (handled internally), so it will be slow by comparison. This has the advantage that storage providers cannot perform the aforementioned stale data attack, but has the downside that writes cost money and take a long time to complete.
That said, we recommend using the mutable data API with several different storage providers whenever possible.
### Mutable Data
The following commands affect mutable data:
* `get_mutable`: Use the profile to look up and fetch a piece of mutable data.
* `put_mutable`: Add a link to mutable data to the profile, and replicate the signed data itself to all storage providers. Other users will need the data's name to read it with `get_mutable`.
* `delete_mutable`: Remove a link to mutable data from the profile, and ask all storage providers to delete the signed data.
### Immutable Data
The following commands affect immutable data:
* `get_immutable`: Look up and fetch a piece of immutable data. You can supply either the name of the data, or its hash (both are stored in the zonefile, so there is no gain or loss of security in this choice).
* `put_immutable`: Replicate a piece of data to all storage providers, add its name and hash to the zonefile, and issue an `update` to upload the new zonefile to Blockstack servers and write the hash to the blockchain.
* `delete_immutable`: Remove the link to the data from the zonefile, ask all storage providers to delete the data, and issue an `update` to upload the new zonefile to Blockstack servers and write the new hash to the blockchain.
* `list_immutable_data_history`: Given the name of a piece of immutable data, query the zonefile history to find the historic list of hashes it has had. **NOTE:** Like `list_zonefile_history` above, this only returns data hashes for the data if the Blockstack server has the historic zonefile.
## Fault Recovery
Sometimes, things beyond our control can happen. Transactions can get stuck, storage providers can go offline or corrupt data, and so on. These commands are meant to assist in recovering from these problems:
* `set_profile`: Directly set a Blockstack ID's profile. All previous accounts, data links, etc. must be included in the new profile, since the old profile (if still present) will be overwritten by the one given here.
* `convert_legacy_profile`: Given a legacy profile taken from a resolver, directly convert it into a new profile. This can be used with `set_profile` to recover from a failed profile migration.
* `unqueue`: If a transaction gets lost or stuck, you can remove it from the CLI's transaction queue with this command. This will allow you to re-try it.
* `rpcctl`: This lets you directly start or stop the Blockstack CLI's background daemon, which lets you recover from any crashes it experiences (you can find a trace of its behavior in `~/.blockstack/api_endpoint.log`).
## Programmatic Access
Other programs may want to make RPC calls to the Blockstack CLI daemon. They can do so using either the `blockstack_client` Python package, or via the CLI as follows:
* `rpc`: Issue a JSON RPC call. Takes a raw JSON string that encodes a list of arguments.

BIN
core/attic/figures/gaia-authentication.png

Binary file not shown.

Before: 33 KiB

BIN
core/attic/figures/gaia-connect.png

Binary file not shown.

Before: 61 KiB

BIN
core/attic/figures/gaia-getfile.png

Binary file not shown.

Before: 62 KiB

BIN
core/attic/figures/gaia-listdir.png

Binary file not shown.

Before: 56 KiB

BIN
core/attic/figures/gaia-putfile.png

Binary file not shown.

Before: 78 KiB

461
core/attic/gaia.md

@ -1,461 +0,0 @@
# LEGACY DOCUMENTATION
Please see the [latest Gaia documentation](https://github.com/blockstack/gaia)
Gaia: The Blockstack Storage System
====================================
The Blockstack storage system, called "Gaia", is used to host each user's data
without requiring users to run their own servers.
Gaia works by hosting data in one or more existing storage systems of the user's choice.
These include cloud storage systems like Dropbox and Google Drive, personal
servers like an SFTP server or a WebDAV server, and decentralized storage
systems like BitTorrent or IPFS. The point is that the user gets to choose
where their data lives, and Gaia enables applications to access it via a
uniform API.
A high-level analogy is to compare Gaia to the VFS and block layer in a UNIX
operating system kernel, and to compare existing storage systems to block
devices. Gaia has "drivers" for each storage system that allow it to load,
store, and delete chunks of data via a uniform interface, and it gives
applications a familiar API for organizing their data.
Applications interface with Gaia via the [Stacks Node
API](https://github.com/blockstack/blockstack-core/tree/master/api). Javascript
applications connect to Gaia using [Blockstack Portal](https://github.com/blockstack/blockstack-portal),
which helps them bootstrap a secure connection to Stacks Blockchain.
# Datastores
Gaia organizes data into datastores. A **datastore** is a filesystem-like
collection of data that is backed by one or more existing storage systems.
When a user logs into an application, the application will create or connect to
the datastore that holds the user's data. Once connected, it can proceed to
interact with its data via POSIX-like functions: `mkdir`, `listdir`, `rmdir`,
`getFile()`, `putFile()`, `deleteFile`, and `stat`.
A datastore has exactly one writer: the user that creates it. However, all data
within a datastore is world-readable by default, so other users can see the
owner's writes even when the owner is offline. Users manage access controls
by encrypting files and directories to make them readable to other specific users.
All data in a datastore is signed by a datastore-specific key on write, in order
to guarantee that readers only consume authentic data.
The application client handles all of the encryption and signing. The other
participants---Blockstack Portal, Stacks Node, and the storage
systems---only ferry data back and forth between application clients.
## Data Organization
True to its filesystem inspiration, data in a datastore is organized into a
collection of inodes. Each inode has two parts:
* a **header**, which contains:
* the inode type (i.e. file or directory)
* the inode ID (i.e. a UUID4)
* the hash of the data it stores
* the size of the data it stores
* a signature from the user
* the version number
* the ID of the device from which it was sent (see Advanced Topics below)
* a **payload**, which contains the raw bytes to be stored.
* For files, this is just the raw bytes.
* For directories, this is a serialized data structure that lists the names
and inode IDs of its children, as well as a copy of the header.
The header has a fixed length, and is somewhat small--only a few hundred bytes.
The payload can be arbitrarily large.
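The two-part layout above can be illustrated with a minimal sketch. Field names here are assumptions for illustration only; the actual wire format is not specified in this document, and the hash stand-in below is not a real cryptographic hash:

```javascript
// Hypothetical in-memory shape of a Gaia inode, following the layout
// described above: a small fixed-size header plus an arbitrary payload.
function makeInode(type, uuid, payload, version, deviceId) {
  return {
    header: {
      type: type,             // 'file' or 'directory'
      id: uuid,               // inode ID (a UUID4 in Gaia)
      payloadHash: String(payload.length), // stand-in for a real hash
      size: payload.length,
      signature: null,        // would be signed with the datastore key
      version: version,
      deviceId: deviceId      // device that wrote this inode
    },
    payload: payload          // raw bytes for files; serialized listing
                              // of child names and inode IDs for dirs
  };
}

var file = makeInode('file', 'uuid-1234', 'hello world', 1, 'laptop');
console.log(file.header.size); // 11
```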
## Data Consistency
The reason for organizing data this way is to make cross-storage system reads
efficient, even when there are stale copies of the data available. In this
organization, reading an inode's data is a matter of:
1. Fetching all copies of the header
2. Selecting the header with the highest version number
3. Fetching the payload from the storage system that served the latest header.
This way, we can guarantee that:
* The inode payload is fetched *once* in the common case, even if there are multiple stale copies of the inode available.
* All clients observe the *strongest* consistency model offered by the
underlying storage providers.
* All readers observe a *minimum* consistency of monotonically-increasing reads.
* Writers observe sequential consistency.
This allows Gaia to interface with decentralized storage systems that make
no guarantees regarding data consistency.
*(Aside 1: The Core node keeps track of the highest last-seen inode version number,
so if all inodes are stale, then no data will be returned).*
*(Aside 2: In step 3, an error path exists whereby all storage systems will be
queried for the payload if the storage system that served the fresh inode does
not have a fresh payload).*
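Under the read protocol above, step 2 reduces to a max-by-version scan over the fetched header copies. A minimal sketch (the data shapes are assumptions for illustration):

```javascript
// Step 2 of the read protocol: given header copies fetched from each
// storage system, pick the one with the highest version number.
// Returns null if no headers were fetched at all (cf. Aside 1 above:
// if every copy is stale, no data is returned).
function selectFreshestHeader(headers) {
  var best = null;
  for (var i = 0; i < headers.length; i++) {
    if (best === null || headers[i].version > best.version) {
      best = headers[i];
    }
  }
  return best;
}

var copies = [
  { version: 3, storage: 'A' },   // stale copy
  { version: 5, storage: 'B' },   // freshest copy
  { version: 5, storage: 'C' }    // equally fresh duplicate
];
// The payload would then be fetched from storage system 'B' (step 3).
console.log(selectFreshestHeader(copies).storage); // 'B'
```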
# Accessing the Datastore
Blockstack applications get access to the datastore as part of the sign-in
process. Suppose the user wishes to sign into the application `foo.app`. Then,
the following protocol is executed:
![Gaia authentication](/docs/figures/gaia-authentication.png)
1. Using `blockstack.js`, the application authenticates to Blockstack Portal via
`makeAuthRequest()` and `redirectUserToSignIn()`.
2. When Portal receives the request, it contacts the user's Core node to get the
list of names owned by the user.
3. Portal redirects the user to a login screen, and presents the user with the
list of names to use. The user selects which name to sign in as.
4. Now that Portal knows which name to use, and which application is signing in,
it loads the datastore private key and requests a Stacks Blockchain session
token. This token will be used by the application to access Gaia.
5. Portal creates an authentication response with `makeAuthResponse()`, which it
relays back to the application.
6. The application retrieves the datastore private key and the Core session
token from the authentication response object.
## Creating a Datastore
Once the application has a Core session token and the datastore private key, it
can connect to the datastore, or create it if it doesn't exist. To do so, the
application calls `datastoreConnectOrCreate()`.
This method contacts the Core node directly. It first requests the public
datastore record, if it exists. The public datastore record
contains information like who owns the datastore, when it was created, and which
drivers should be used to load and store its data.
![Gaia connect](/docs/figures/gaia-connect.png)
Suppose the user signing into `foo.app` does not yet have a datastore, and wants
to store her data on storage providers `A`, `B`, and `C`. Then, the following
protocol executes:
1. The `datastoreConnectOrCreate()` method will generate a signed datastore record
stating that `alice.id`'s public key owns the datastore, and that the drivers
for `A`, `B`, and `C` should be loaded to access its data.
2. The `datastoreConnectOrCreate()` method will call `mkdir()` to create a
signed root directory.
3. The `datastoreConnectOrCreate()` method will send these signed records to the Core node.
The Core node replicates the root directory header (blue path), the root
directory payload (green path), and the datastore record (gold path).
4. The Core node then replicates them with drivers `A`, `B`, and `C`.
Now, storage systems `A`, `B`, and `C` each hold a copy of the datastore record
and its root directory.
*(Aside: The datastore record's consistency is preserved the same way as the
inode consistency).*
## Reading Data
Once the application has a Core session token, a datastore private key, and a
datastore connection object, it can read data from the datastore. The available
methods are:
* `listDir()`: Get the contents of a directory
* `getFile()`: Get the contents of a file
* `stat()`: Get a file or directory's header
Reading data is done by path, just as it is in UNIX. At a high-level, reading
data involves (1) resolving the path to the inode, and (2) reading the inode's
contents.
Path resolution works as it does in UNIX: the root directory is fetched, then
the first directory in the path, then the second directory, then the third
directory, etc., until either the file or directory at the end of the path is
fetched, or the name does not exist.
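The walk described above can be sketched with a minimal in-memory model. Directories are modeled as dicts mapping child names to inodes and files as raw bytes; this mirrors the resolution order, not Gaia's actual on-disk structures.

```python
# Minimal sketch of UNIX-style path resolution over directory inodes.
# A directory is a dict of name -> child inode; a file is just bytes.

def resolve_path(root, path):
    node = root
    for part in path.strip("/").split("/"):
        if part == "":
            continue  # the path was "/" itself
        if not isinstance(node, dict):
            raise NotADirectoryError(part)   # walked into a file
        if part not in node:
            raise FileNotFoundError(part)    # name does not exist
        node = node[part]                    # fetch the next inode
    return node

root = {"photos": {"cat.png": b"\x89PNG"}, "bar": b"hello"}
```

Each step of the loop corresponds to fetching one directory inode, so resolution cost grows with path depth.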
### Authenticating Data
Data authentication happens in the Core node.
This is meant to enable linking files and directories to legacy Web
applications. For example, a user might upload a photo to a datastore, and
create a public URL to it to share with friends who do not yet use Blockstack.
By default, the Core node serves back the inode payload data
(`application/octet-stream` for files, and `application/json` for directories).
The application client may additionally request the signatures from the Core
node if it wants to authenticate the data itself.
### Path Resolution
Applications do not need to do path resolution themselves; they simply ask the
Stacks Node to do so on their behalf. Fetching the root directory
works as follows:
1. Get the root inode ID from the datastore record.
2. Fetch all root inode headers.
3. Select the latest inode header, and then fetch its payload.
4. Authenticate the data.
For example, if a client wanted to read the root directory, it would call
`listDir()` with `"/"` as the path.
![Gaia listdir](/docs/figures/gaia-listdir.png)
The blue paths are the Core node fetching the root inode's headers. The green
paths are the Core node selecting the latest header and fetching the root
payload. The Core node would reply the list of inode names within the root
directory.
Once the root directory is resolved, the client simply walks down the path to
the requested file or directory. This involves iteratively fetching a
directory, searching its children for the next directory in the path, and if it
is found, proceeding to fetch it.
### Fetching Data
Once the Core node has resolved the path to the base name, it looks up the inode
ID from the parent directory and fetches it from the backend storage providers
via the relevant drivers.
For example, fetching the file `/bar` works as follows:
![Gaia getFile](/docs/figures/gaia-getfile.png)
1. Resolve the root directory (blue paths)
2. Find `bar` in the root directory
3. Get `bar`'s headers (green paths)
4. Find the latest header for `bar`, and fetch its payload (gold paths)
5. Return the contents of `bar`.
## Writing Data
There are three steps to writing data:
* Resolving the path to the inode's parent directory
* Creating and replicating the new inode
* Linking the new inode to the parent directory, and uploading the new parent
directory.
All of these are done with both `putFile()` and `mkdir()`.
### Creating a New Inode
When it calls either `putFile()` or `mkdir()`, the application client will
generate a new inode header and payload and sign them with the datastore private
key. Once it has done so successfully, it will insert the new inode's name and
ID into the parent directory, give the parent directory a new version number,
and sign and replicate it and its header.
For example, suppose the client attempts to write the data `"hello world"` to `/bar`.
To do so:
![Gaia putFile](/docs/figures/gaia-putfile.png)
1. The client executes `listDir()` on the parent directory, `/` (blue paths).
2. If an inode by the name of `bar` exists in `/`, then the method fails.
3. The client makes a new inode header and payload for `bar` and signs them with
the datastore private key. It replicates them to the datastore's storage
drivers (green paths).
4. The client adds a record for `bar` in `/`'s data obtained from (1),
increments the version for `/`, and signs and replicates `/`'s header and
payload (gold paths).
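The four steps above can be sketched against the same kind of in-memory model. Signing and driver replication are elided, and the `store` layout is an assumption made for illustration, not Gaia's real structures.

```python
# Sketch of the putFile() protocol: check the parent listing, create the
# new inode, then link it in and bump the parent directory's version.

def put_file(datastore, path, data):
    parent, name = path.rsplit("/", 1)
    dirent = datastore["dirs"][parent or "/"]
    # 1-2. listDir() the parent; fail if an inode by this name exists.
    if name in dirent["children"]:
        raise FileExistsError(path)
    # 3. Create (and "replicate") the new inode header + payload.
    datastore["files"][path] = data
    # 4. Link the new inode into the parent and increment its version.
    dirent["children"].append(name)
    dirent["version"] += 1

store = {"dirs": {"/": {"children": [], "version": 1}}, "files": {}}
put_file(store, "/bar", b"hello world")
```

Note that the parent directory is rewritten with a higher version number, which is what lets readers pick up the new entry under the consistency scheme described earlier.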
### Updating a File or Directory
A client can call `putFile()` multiple times to set the file's contents. In
this case, the client creates, signs, and replicates a new inode header and new
inode payload for the file. It does not touch the parent directory at all.
In this case, `putFile()` will only succeed if the parent directory lists an
inode with the given name.
A client cannot directly update the contents of a directory.
## Deleting Data
Deleting data can be done with either `rmdir()` (to remove an empty directory)
or `deleteFile()` (to remove a file). In either case, the protocol executed is
1. The client executes `listDir()` on the parent directory
2. If an inode by the given name does not exist, then the method fails.
3. The client removes the inode's name and ID from the directory listing, signs
the new directory, and replicates it to the Stacks Node.
4. The client tells the Stacks Node to delete the inode's header and
payload from all storage systems.
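The deletion protocol mirrors the write protocol: unlink from the parent first, then drop the inode data. As before, this is an illustrative in-memory sketch with signing and replication elided.

```python
# Sketch of the delete protocol: update the parent listing first, then
# remove the inode's header and payload.

def delete_file(datastore, path):
    parent, name = path.rsplit("/", 1)
    dirent = datastore["dirs"][parent or "/"]
    # 1-2. listDir() the parent; fail if the name does not exist.
    if name not in dirent["children"]:
        raise FileNotFoundError(path)
    # 3. Remove the entry and bump the version ("sign and replicate").
    dirent["children"].remove(name)
    dirent["version"] += 1
    # 4. Tell the node to delete the inode header and payload everywhere.
    del datastore["files"][path]

store = {"dirs": {"/": {"children": ["bar"], "version": 2}},
         "files": {"/bar": b"hello world"}}
delete_file(store, "/bar")
```

Doing the unlink before the delete is what makes a crash mid-way leak data rather than leave a dangling directory entry, as the failure analysis below discusses.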
# Advanced Topics
These features are still being implemented.
## Data Integrity
What happens if the client crashes while replicating new inode state? What
happens if the client crashes while deleting inode state? The data hosted in
the underlying data stores can become inconsistent with the directory structure.
Given the choice between leaking data and rendering data unresolvable, Gaia
chooses to leak data.
### Partial Inode-creation Failures
When creating a file or directory, Gaia stores four records in this order:
* the new inode payload
* the new inode header
* the updated parent directory payload
* the updated parent directory header
If the new payload replicates successfully but the new header does not, then the
new payload is leaked.
If the new payload and new header replicate successfully, but neither parent
directory record succeeds, then the new inode header and payload are leaked.
If the new payload, new header, and updated parent directory payload replicate
successfully, but the updated parent header fails, then not only are the new
inode header and payload leaked, but also *reading the parent directory will
fail due to a hash mismatch between its header and inode*. This can be detected
and resolved by observing that the copy of the header in the parent directory
payload has a later version than the parent directory header indicates.
### Partial Inode-deletion Failures
When deleting a file or directory, Gaia alters records in this order:
* update parent directory payload
* update parent directory header
* delete inode header
* delete inode payload
Similar to partial failures from updating parent directories when creating
files, if the client replicates the new parent directory inode payload but fails
before it can update the header, then clients will detect on the next read that
the updated payload is valid because it has a signed inode header with a newer
version.
If the client successfully updates the parent directory but fails to delete
either the inode header or payload, then they are leaked. However, since the
directory was updated, no correct client will access the deleted inode data.
### Leak Recovery
Gaia's storage drivers are designed to keep the inode data they store in a
"self-contained" way (i.e. within a single folder or bucket). In the future,
we will implement a `fsck`-like tool that will scan through a datastore and find
the set of inode headers and payloads that are no longer referenced and delete
them.
## Multi-Device Support
Contemporary users read and write data across multiple devices. In this
document, we have thus far described datastores with the assumption that there
is a single writer at all times.
This assumption is still true in a multi-device setting, since a user is
unlikely to be writing data with the same application simultaneously from two
different devices.
However, an open question is how multiple devices can access the same
application data for a user. Our design goal is to **give each device its own
keyring**, so if it gets lost, the user can revoke its access without having to
re-key her other devices.
To do so, we'll expand the definition of a datastore to be a **per-user,
per-application, and per-device** collection of data. The view of a user's
application data will be constructed by merging each device-specific
datastore, and resolving conflicts by showing the "last-written" inode (where
"last-written" is determined by a loosely-synchronized clock).
For example, if a user uploads a profile picture from their phone, and then
uploads a profile picture from their tablet, a subsequent read will query the
phone-originated picture and the tablet-originated picture, and return the
tablet-originated picture.
The aforementioned protocols will need to be extended to search for inode
headers not only on each storage provider, but also search for inodes on the
same storage provider that may have been written by each of the user's devices.
Without careful optimization, this can lead to a lot of inode header queries,
which we address in the next topic.
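The "last-written wins" merge can be sketched as follows. Each device contributes its own view of the datastore, with a loosely-synchronized timestamp per inode; the representation is hypothetical, chosen only to illustrate the conflict-resolution rule.

```python
# Sketch of merging per-device datastores by last-written timestamp.
# Each view maps a path to (timestamp, data) from one device.

def merge_device_views(device_views):
    merged = {}
    for view in device_views:
        for path, (timestamp, data) in view.items():
            # Keep whichever device wrote this path most recently.
            if path not in merged or timestamp > merged[path][0]:
                merged[path] = (timestamp, data)
    return merged

# The profile-picture example: phone upload, then tablet upload.
phone  = {"/profile.png": (100, b"phone-pic")}
tablet = {"/profile.png": (170, b"tablet-pic")}
```

Because the clocks are only loosely synchronized, two near-simultaneous writes may be ordered arbitrarily; the scheme guarantees convergence, not a true global order.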
## A `.storage` Namespace
Stacks Nodes can already serve as storage "gateways". That is, one
node can ask another node to store its data and serve it back to any reader.
For example, Alice can make her Stacks Node public and program it to
store data to her Amazon S3 bucket and her Dropbox account. Bob can then post data to Alice's
node, causing her node to replicate data to both providers. Later, Charlie can
read Bob's data from Alice's node, causing Alice's node to fetch and serve back
the data from her cloud storage. Neither Bob nor Charlie has to set up accounts on
Amazon S3 and Dropbox this way.
Since Alice is on the read/write path between Bob and Charlie and cloud storage,
she has the opportunity to make optimizations. First, she can program her
Core node to synchronously write data to
local disk and asynchronously back it up to S3 and Dropbox. This would speed up
Bob's writes, but at the cost of durability (i.e. Alice's node could crash
before replicating to the cloud).
In addition, Alice can program her Core node to service all reads from disk. This
would speed up Charlie's reads, since he'll get the latest data without having
to hit back-end cloud storage providers.
Since Alice is providing a service to Bob and Charlie, she will want
compensation. This can be achieved by having both of them send her money via
the underlying blockchain.
To do so, she would register her node's IP address in a
`.storage` namespace in Blockstack, and post her per-GB rates and her payment
address in her node's profile. Once Bob and Charlie send her payment, her
node would begin accepting reads and writes from them, up to the capacity
purchased. They would continue sending payments as long as Alice provides them
with service.
Other experienced node operators would register their nodes in `.storage`, and
compete for users by offering better durability, availability, performance,
extra storage features, and so on.

---

`core/attic/openbazaar.md`
# How to link your OpenBazaar GUID to your Blockstack ID
{:.no_toc}
* TOC
{:toc}
If you don't have the Blockstack CLI, download and install it first. Instructions are [here](https://github.com/blockstack/blockstack-cli/blob/master/README.md). The rest of this tutorial assumes that you've already registered a name using the Blockstack CLI.
## Step 1: Advanced Mode
The first step is to activate "advanced mode" in the CLI. The command to do so is:
```
$ blockstack set_advanced_mode on
```
## Step 2: Add an OpenBazaar Account
The second step is to create an OpenBazaar account for your profile (the [Onename](https://onename.com) app can also link your OpenBazaar GUID). The command to do so is:
```
$ blockstack put_account "<BLOCKSTACK ID>" "openbazaar" "<YOUR OB GUID>" "<URL TO YOUR STORE>"
```
The URL can be any valid URL; it won't be used by OpenBazaar. Here's an example, using the name `testregistration001.id` and the GUID `0123456789abcdef`:
```
$ blockstack put_account "testregistration001.id" "openbazaar" "0123456789abcdef" "https://bazaarbay.org/@testregistration001"
```
The update should be instantaneous. You can verify that your store is present with `list_accounts`:
```
$ blockstack list_accounts "testregistration001.id"
{
"accounts": [
{
"contentUrl": "https://bazaarbay.org/@testregistration001.id",
"identifier": "0123456789abcdef",
"service": "openbazaar"
}
]
}
```
# Troubleshooting
Common problems you might encounter.
## Profile is in legacy format
If you registered your Blockstack ID before spring 2016, there's a chance that your profile is still in a legacy format. It will be migrated to the new format automatically if you update your profile on the [Onename](https://onename.com) app. With the CLI, however, you have to do this manually.
To do so, the command is:
```
$ blockstack migrate <YOUR BLOCKSTACK ID>
```
It will take a little over an hour to complete, but once finished, you'll be able to manage your accounts with the above commands (and do so with no delays).
## Failed to broadcast update transaction
This can happen during a `migrate` for one of a few reasons:
* You do not have enough balance to pay the transaction fee (which is calculated dynamically).
* Your payment address has unconfirmed transactions.
* You can't connect to a Bitcoin node.
To determine what's going on, you should try the command again by typing `BLOCKSTACK_DEBUG=1 blockstack ...` instead of `blockstack ...`.

---

`core/attic/resolver.md`
# Blockstack Resolver
During 2014-2016, the Blockstack resolver was a separate service (like DNS resolvers).
It was merged into the Blockstack API in early 2017.
The following (legacy) API call is still being supported by the Blockstack API:
```
http://localhost:5000/v2/users/fredwilson
```
And you can see a legacy resolver in action at http://resolver.onename.com/v2/users/fredwilson
## Cron Job for Namespaces
**Note: the instructions below need updating.**
Currently, the resolver indexes all valid names in a local file which can be
populated by running
> $ ./refresh_names.sh
On a production deployment, you should add a crond job to periodically run this
script. You can edit your crontab file by:
> $ crontab -e
Here is a sample crontab file that runs the refresh script every two hours:
```
SHELL=/bin/bash
HOME=/home/ubuntu
#This is a comment
0 */2 * * * /home/ubuntu/resolver/resolver/refresh_names.sh
```

---

`core/attic/tutorial_creation.md`
# Create and Launch a Namespace
{:.no_toc}
This tutorial teaches you how to create your own namespace. It contains the
following sections:
* TOC
{:toc}
Creating namespaces is expensive.
Be sure to test your namespace in our [integration test
framework](https://github.com/blockstack/blockstack-core/tree/master/integration_tests)
first! It will let you simulate any valid namespace configuration
you want at no risk to you.
>**WARNING**: If you intend to create a namespace, you must read this document
_in its entirety_. You should also _install the test framework_ and experiment
with your namespace's parameters. _FAILURE TO DO SO MAY RESULT IN IRRECOVERABLE
LOSS OF FUNDS._
## Before you begin
Some basic familiarity with how Bitcoin works is required to
understand this tutorial. This includes:
* knowing the difference between mainnet, testnet, and regtest
* knowing about compressed and uncompressed ECDSA public keys
* knowing about base58-check encoding
* knowing how Bitcoin transactions are structured
* knowing how UTXOs work
Creating a namespace is a three-step process. The first step is to `preorder`
the namespace, which broadcasts a salted hash of the namespace ID. The second
step is to `reveal` the namespace, which exposes the namespace ID and price
function to the blockchain. The final step is to `ready` the namespace, which
allows anyone to register names within it.
In between the `reveal` and `ready` steps, the namespace creator will have a
"lock" on the namespace that lasts for about 1 year. During this time period,
the namespace creator can `import` names. The `import` transaction lets the
namespace creator assign the name a zone file and an owner in one step.
## Before Trying This in Production...
### Setting up the Test Environment
In this example, we will use the test framework to create a private Bitcoin
blockchain on your computer, and then create a Blockstack namespace on it.
This will let you experiment with different namespace parameters
without spending actual BTC. The test framework uses `bitcoind -regtest`,
so all of the commands you'll run here will work identically on
mainnet.
To install the test framework, please follow these
[instructions](https://github.com/blockstack/blockstack-core/tree/master/integration_tests).
Once you have the test framework installed, you should run the `namespace_check` test in `--interactive-web` mode.
This will create an empty `.test` namespace and leave the test scenario running
once it finishes. You will be able to fund addresses and create new blocks via
your Web browser or via `curl`, as will be explained below. Also, you'll be able to use the
`blockstack` utility to interact with your private blockchain and namespaces.
The test setup command is as follows. This will launch the `namespace_check`
test scenario, and open a web server on port 3001.
```bash
$ blockstack-test-scenario --interactive-web 3001 blockstack_integration_tests.scenarios.namespace_check
```
When the test is ready for us to experiment, you should see the following:
```bash
An empty namespace called 'test' has been created
Feel free to experiment with other namespaces
Available keys with a balance:
* 6e50431b955fe73f079469b24f06480aee44e4519282686433195b3c4b5336ef01
* c244642ce0b4eb68da8e098facfcad889e3063c36a68b7951fb4c085de49df1b01
* f4c3907cb5769c28ff603c145db7fc39d7d26f69f726f8a7f995a40d3897bb5201
* 8f87d1ea26d03259371675ea3bd31231b67c5df0012c205c154764a124f5b8fe01
* bb68eda988e768132bc6c7ca73a87fb9b0918e9a38d3618b74099be25f7cab7d01
* 2,3,6f432642c087c2d12749284d841b02421259c4e8178f25b91542c026ae6ced6d01,65268e6267b14eb52dc1ccc500dc2624a6e37d0a98280f3275413eacb1d2915d01,cdabc10f1ff3410082448b708c0f860a948197d55fb612cb328d7a5cc07a6c8a01
* 2,3,4c3ab2a0704dfd9fdc319cff2c3629b72ebda1580316c7fddf9fad1baa323e9601,75c9f091aa4f0b1544a59e0fce274fb1ac29d7f7e1cd020b66f941e5d260617b01,d62af1329e541871b244c4a3c69459e8666c40b683ffdcb504aa4adc6a559a7701
* 2,3,4b396393ca030b21bc44a5eba1bb557d04be1bfe974cbebc7a2c82b4bdfba14101,d81d4ef8123852403123d416b0b4fb25bcf9fa80e12aadbc08ffde8c8084a88001,d0482fbe39abd9d9d5c7b21bb5baadb4d50188b684218429f3171da9de206bb201
* 2,3,836dc3ac46fbe2bcd379d36b977969e5b6ef4127e111f2d3e2e7fb6f0ff1612e01,1528cb864588a6a5d77eda548fe81efc44180982e180ecf4c812c6be9788c76a01,9955cfdac199b8451ccd63ec5377a93df852dc97ea01afc47db7f870a402ff0501
```
You can determine that the test framework is live by going to
`http://localhost:3001` in your Web browser. From there, you can generate
blocks in the test framework's `bitcoind` node and you can fund any address in
the test framework.
Finally, you can use the `blockstack-test-env` command to set up your shell
environment variables so `blockstack` will interact with this test (instead of
mainnet). To do so, run the following in your shell:
```bash
$ . $(which blockstack-test-env) namespace_check
|blockstack-test namespace_check| $
```
You can verify that the environment variables are set by checking that your `$PS1`
variable includes the name of your test (as shown above), and that some other
`BLOCKSTACK_`-prefixed variables are set:
```bash
|blockstack-test namespace_check| $ env | grep BLOCKSTACK
BLOCKSTACK_OLD_PS1=\u@\h:\w$
BLOCKSTACK_TESTNET=1
BLOCKSTACK_EPOCH_1_END_BLOCK=1
BLOCKSTACK_EPOCH_2_END_BLOCK=2
BLOCKSTACK_TEST=1
BLOCKSTACK_DEBUG=1
BLOCKSTACK_CLIENT_CONFIG=/tmp/blockstack-run-scenario.blockstack_integration_tests.scenarios.namespace_check/client/client.ini
```
## Registering a Namespace
Suppose we're going to create the `hello` namespace. The key
`6e50431b955fe73f079469b24f06480aee44e4519282686433195b3c4b5336ef01` will be the key that
*pays* for the namespace. The key
`c244642ce0b4eb68da8e098facfcad889e3063c36a68b7951fb4c085de49df1b01` will be the key that
*creates* the namespace. The creator key will be used to `import` names and
declare the namespace `ready`. The payment key will be used to both pay for the
namespace and receive name registration and renewal fees for the first year of
the namespace's lifetime.
In this example, we will set these keys as environment variables:
```bash
|blockstack-test namespace_check| $ export PAYMENT_PKEY="6e50431b955fe73f079469b24f06480aee44e4519282686433195b3c4b5336ef01"
|blockstack-test namespace_check| $ export CREATOR_PKEY="c244642ce0b4eb68da8e098facfcad889e3063c36a68b7951fb4c085de49df1b01"
```
#### Multisig Namespace Payment
If you want to use a multisig address to pay for your namespace (and collect
name registration fees), then instead of using
`6e50431b955fe73f079469b24f06480aee44e4519282686433195b3c4b5336ef01`, you should
use a string formatted as `m,n,pk1,pk2,...,pk_n`. `m` is the number of
signatures required, `n` is the number of private keys, and `pk1,pk2,...,pk_n`
are the private keys.
For example, you can use the following as your `PAYMENT_PKEY` to have a 2-of-3
multisig script pay for your namespace and collect name registration fees:
```bash
|blockstack-test namespace_check| $ export PAYMENT_PKEY="2,3,6f432642c087c2d12749284d841b02421259c4e8178f25b91542c026ae6ced6d01,65268e6267b14eb52dc1ccc500dc2624a6e37d0a98280f3275413eacb1d2915d01,cdabc10f1ff3410082448b708c0f860a948197d55fb612cb328d7a5cc07a6c8a01"
```
### Namespace preorder
The command to preorder the namespace would be:
```bash
|blockstack-test namespace_check| $ blockstack namespace_preorder hello "$PAYMENT_PKEY" "$CREATOR_PKEY"
```
You will be given a set of instructions on how to proceed to reveal and
launch the namespace. _READ THEM CAREFULLY_. You will be prompted to
explicitly acknowledge that you understand the main points of the instructions,
and that you understand the risks.
The command outputs some necessary information at the very end of its execution.
In particular, you will need to remember the transaction ID of the namespace
preorder. The command will help you do so.
Here is a sample output:
```bash
|blockstack-test namespace_check| $ blockstack namespace_preorder hello "$PAYMENT_PKEY" "$CREATOR_PKEY"
<...snip...>
Remember this transaction ID: b40dd1375ef63e5a40ee60d790ec6dccd06efcbac99d0cd5f3b07502a4ab05ac
You will need it for `blockstack namespace_reveal`
Wait until b40dd1375ef63e5a40ee60d790ec6dccd06efcbac99d0cd5f3b07502a4ab05ac has six (6) confirmations. Then, you can reveal `hello` with:
$ blockstack namespace_reveal "hello" "6e50431b955fe73f079469b24f06480aee44e4519282686433195b3c4b5336ef01" "c244642ce0b4eb68da8e098facfcad889e3063c36a68b7951fb4c085de49df1b01" "b40dd1375ef63e5a40ee60d790ec6dccd06efcbac99d0cd5f3b07502a4ab05ac"
{
"status": true,
"success": true,
"transaction_hash": "b40dd1375ef63e5a40ee60d790ec6dccd06efcbac99d0cd5f3b07502a4ab05ac"
}
```
If all goes well, you will get back a transaction hash (in this case, `b40dd1375ef63e5a40ee60d790ec6dccd06efcbac99d0cd5f3b07502a4ab05ac`).
To get Blockstack to process it, you will need to mine some blocks in the test framework (by default,
Blockstack will only accept a transaction that has 6 confirmations). To do
this, simply go to `http://localhost:3001` and generate at least 6 blocks. If you
observe the test log, you will see the Blockstack node process and accept it.
Note that when you do this live, you should wait for
at least 10 confirmations before sending the `reveal` transaction, just to be
safe.
### Namespace reveal
The command to reveal a preordered namespace is more complicated, since it
describes the price curve.
This command is **interactive**. The command to invoke it is as follows:
```bash
|blockstack-test namespace_check| $ blockstack namespace_reveal hello "$PAYMENT_PKEY" "$CREATOR_PKEY" "b40dd1375ef63e5a40ee60d790ec6dccd06efcbac99d0cd5f3b07502a4ab05ac"
```
When running the command, you will see the namespace creation wizard prompt you
with the price curve and the current values:
```
Name lifetimes (blocks): infinite
Price coefficient: 4
Price base: 4
Price bucket exponents: [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
Non-alpha discount: 2
No-vowel discount: 5
Burn or receive fees? Receive to mr6nrMvvh44sR5MiX929mMXP5hqgaTr6fx
Name price formula:
(UNIT_COST = 10.0 satoshi):

                                       buckets[min(len(name)-1, 15)]
                UNIT_COST * coeff * base
cost(name) = -----------------------------------------------------
                 max(nonalpha_discount, no_vowel_discount)
Name price table:
| length | price | price, nonalpha | price, no vowel | price, both |
--------------------------------------------------------------------------
| 1 | 42949672960 | 8589934592 | 8589934592 | 8589934592 |
| 2 | 10737418240 | 5368709120 | 2147483648 | 2147483648 |
| 3 | 2684354560 | 1342177280 | 536870912 | 536870912 |
| 4 | 671088640 | 335544320 | 134217728 | 134217728 |
| 5 | 167772160 | 83886080 | 33554432 | 33554432 |
| 6 | 41943040 | 20971520 | 8388608 | 8388608 |
| 7 | 10485760 | 5242880 | 2097152 | 2097152 |
| 8 | 2621440 | 1310720 | 524288 | 524288 |
| 9 | 655360 | 327680 | 131072 | 131072 |
| 10 | 163840 | 81920 | 32768 | 32768 |
| 11 | 40960 | 20480 | 8192 | 8192 |
| 12 | 10240 | 5120 | 2048 | 2048 |
| 13 | 2560 | 1280 | 512 | 512 |
| 14 | 640 | 320 | 128 | 128 |
| 15 | 160 | 80 | 32 | 32 |
| 16+ | 40 | 20 | 10 | 10 |
What would you like to do?
(0) Set name lifetime in blocks (positive integer between 1 and 4294967295, or "infinite")
(1) Set price coefficient (positive integer between 1 and 255)
(2) Set base price (positive integer between 1 and 255)
(3) Set price bucket exponents (16 comma-separated integers, each between 1 and 15)
(4) Set non-alphanumeric character discount (positive integer between 1 and 15)
(5) Set no-vowel discount (positive integer between 1 and 15)
(6) Toggle collecting name fees (True: receive fees; False: burn fees)
(7) Show name price formula
(8) Show price table
(9) Done
(1-9)
```
All prices are in the "fundamental unit" of the underlying blockchain (i.e.
satoshis).
As the formula describes, the name's price is a function of:
* a fixed unit cost (`UNIT_COST`)
* a multiplicative constant coefficient (`coeff`)
* a fixed exponential base (`base`)
* a 16-element list of price buckets, indexed by the length of the name (`buckets`)
* a discount for having non-alphanumeric letters (`nonalpha_discount`)
* a discount for having no vowels in the name (`no_vowel_discount`)
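The pricing function can be reproduced as a short sketch using the default parameters shown in the wizard session above (unit cost 10, coefficient 4, base 4, the bucket list, and discounts 2 and 5). The floor at `UNIT_COST` is an assumption made here so that the discounted 16+-character prices match the table; the discounts apply only when the name actually has non-alpha characters or lacks vowels.

```python
# Sketch of the name price formula, with the wizard's default parameters.
# The floor at UNIT_COST is assumed to match the table's 16+ rows.

UNIT_COST = 10
COEFF = 4
BASE = 4
BUCKETS = [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
NONALPHA_DISCOUNT = 2
NO_VOWEL_DISCOUNT = 5

def name_cost(name):
    exponent = BUCKETS[min(len(name) - 1, 15)]
    price = UNIT_COST * COEFF * BASE ** exponent
    discount = 1
    if any(not c.isalpha() for c in name):
        discount = max(discount, NONALPHA_DISCOUNT)
    if not any(c in "aeiou" for c in name.lower()):
        discount = max(discount, NO_VOWEL_DISCOUNT)
    return max(price // discount, UNIT_COST)
```

Note that a one-character non-alpha name also has no vowels, which is why the table's length-1 "nonalpha" column reflects the larger no-vowel discount.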
You can use options 1 through 8 to play with the pricing function and examine
the name costs in the price table. Enter 9 to send the transaction itself.
Once you're happy, you can issue the namespace-reveal transaction. As with the
namespace-preorder transaction, you will get back a transaction hash, and your transaction will be
unconfirmed. Simply go to `http://localhost:3001` to generate some more blocks
to confirm your namespace-reveal.
Once you have confirmed your namespace-reveal transaction, you can
begin to populate your namespace with some initial names.
**Collecting Name Fees**
Blockstack 0.17 introduced the ability to create a namespace such that for the
first year of its existence (54595 blocks), all name registration and renewal
fees will be sent to the address of the _payment key_. In this example,
this is the address `mr6nrMvvh44sR5MiX929mMXP5hqgaTr6fx`.
The alternative is to
have all namespace fees sent to an unspendable burn address
(`1111111111111111111114oLvT2`). This is the case for the `.id` namespace,
for example.
After the year has passed, all future name registrations and renewal fees
will be sent to the unspendable burn address. This is to disincentivize
namespace squatters.
**Warnings**
* You must issue this command **within 144 blocks** of the namespace-preorder transaction. Otherwise, the preorder will expire and you will need to start over from scratch.
### Importing names
After sending the `reveal` transaction, you can populate your namespace with
some initial names. You can do so with the `name_import` command.
Suppose we want to import the name `example.hello` and assign it to an owner
whose public key address is `ms6Y32bcL5zhA57e8tf7awgVZuPxV8Xg8N`. Suppose also
that you wanted to give `example.hello` an initial zone file stored at
`/var/blockstack/zone_files/example.hello`. To do so, you would issue the
following command:
```bash
|blockstack-test namespace_check| $ blockstack name_import example.hello ms6Y32bcL5zhA57e8tf7awgVZuPxV8Xg8N /var/blockstack/zone_files/example.hello "$CREATOR_PKEY"
```
By default, you **must** use the private key you used to reveal the namespace
to import names (this is `$CREATOR_PKEY` in this example).
As with namespace-preorder and namespace-reveal, the transaction this command
generates will be unconfirmed. Simply go to `http://localhost:3001` to generate
some blocks to confirm it.
You can check the progress of the transaction with `blockstack info`, as follows:
```bash
|blockstack-test namespace_check| $ blockstack info
{
"cli_version": "0.17.0.8",
"consensus_hash": "b10fdd38a20a7e46555ce3a7f68cf95c",
"last_block_processed": 694,
"last_block_seen": 694,
"queues": {
"name_import": [
{
"confirmations": 1,
"name": "example.hello",
"tx_hash": "10f7dcd9d6963ef5d20d010f731d5d2ddb76163a083b9d7a2b9fd4515c7fe58c"
}
]
},
"server_alive": true,
"server_host": "localhost",
"server_port": 16264,
"server_version": "0.17.0.8"
}
```
The `confirmations` field indicates how deep in the blockchain the transaction
currently is. Generating more blocks increases its number of confirmations.
When you do this live,
**YOU SHOULD LEAVE YOUR COMPUTER RUNNING UNTIL THE `name_import` QUEUE IS EMPTY**.
Blockstack's background API daemon will monitor the transactions and upload the
name's zone file to the Blockstack Atlas network once it is confirmed.
But to do so, your computer must remain online. If you do not do this, then
the name will not have a zone file and will be unusable in the higher layers of
Blockstack-powered software (including Blockstack applications). However,
if your computer does go offline or reboots, you can recover by
restarting the Blockstack API daemon (with
`blockstack api start`). The daemon itself will pick up where it left off, and
replicate all zone files that have confirmed transactions.
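Whether it is safe to go offline can be read straight from the `blockstack info` output: the `queues` object must be empty. A minimal sketch (the embedded JSON samples mirror the outputs shown in this section):

```python
import json

def safe_to_stop(info_json):
    """True once the background daemon has no pending queue entries."""
    info = json.loads(info_json)
    # an empty (or absent) "queues" object means nothing is awaiting upload
    return not info.get("queues")

pending = '{"queues": {"name_import": [{"name": "example.hello", "confirmations": 1}]}}'
done = '{"queues": {}}'
print(safe_to_stop(pending), safe_to_stop(done))  # False True
```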
After the zone file is uploaded, the name will be public and resolvable. You can re-import the
same name over and over, and give it a different address and/or zone file. Like
all other names, the Blockstack Atlas network will accept and propagate zone
files for imported names.
The owner of the address `ms6Y32bcL5zhA57e8tf7awgVZuPxV8Xg8N` will **not** be
able to issue any transactions for the name `example.hello` until the namespace
creator has sent the `ready` transaction.
#### Using multiple private keys for NAME_IMPORT
Bitcoin itself imposes limits on how fast you can send transactions from the
same key (limited by a maximum UTXO-chain length). To work around this,
Blockstack lets you import names by using up to 300 private keys. The private
keys you can use are BIP32 unhardened children of the namespace reveal key (i.e.
`$CREATOR_PKEY` in this example).
The first name you import **must** use the namespace reveal private key
(`$CREATOR_PKEY` in this example). However, all future names you import in this
namespace can use one of the 300 BIP32 keys.
To get the list of keys you can use, you can use the `make_import_keys` command:
```bash
|blockstack-test namespace_check| $ blockstack make_import_keys example hello "$CREATOR_PKEY"
aeda50305ada40aaf53f2d8921aa717f1ec71a0a3b9b4c6397b3877f6d45c46501 (n4DVTuLLv5J1Yc17AoRYY1GtxDAuLGAESr)
92ff179901819a1ec7d32997ce3bb0d9a913895d5850cc05146722847128549201 (mib2KNBGR4az8GiUmusBZexVBqb9YB2gm5)
cc5b6a454e2b614bfa18f4deb9a8e179ab985634d63b7fedfaa59573472d209b01 (mxE2iqV4jdpn4K349Gy424TvZp6MPqSXve)
9b0265e0ac8c3c24fe1d79a734b3661ec2b5c0c2619bb6342356572b8235910101 (n4rGz8hkXTscUGWCwZvahrkEh6LHZVQUoa)
e2585af250404b7918cf6c91c6fa67f3401c0d1ae66df2fafa8fa132f4b9350f01 (moGNpEpighqc6FnkqyNVJA9xtfTiStr5YU)
{
"status": true
}
```
(NOTE: in the test environment, you get only 5 keys in order to save time).
You can use any of these keys to import names.
#### Trying it out
Here's an example walkthrough of how to try this out in the test framework:
1. Import the first name, creating a zone file in the process:
```bash
|blockstack-test namespace_check| $ cat > /var/blockstack/zone_files/example.hello <<'EOF'
> $ORIGIN example.hello
> $TTL 3600
> _file URI 10 1 "file:///home/blockstack-test/example.hello"
> EOF
|blockstack-test namespace_check| $ blockstack name_import example.hello ms6Y32bcL5zhA57e8tf7awgVZuPxV8Xg8N /var/blockstack/zone_files/example.hello "$CREATOR_PKEY"
Import cost breakdown:
{
"name_import_tx_fee": {
"btc": 0.0003342,
"satoshis": 33420
},
"total_estimated_cost": {
"btc": 0.0003342,
"satoshis": 33420
},
"total_tx_fees": 33420
}
Importing name 'example.hello' to be owned by 'ms6Y32bcL5zhA57e8tf7awgVZuPxV8Xg8N' with zone file hash '05c302430f4ed0a24470abf9df7e264d517fd389'
Proceed? (y/N) y
{
"status": true,
"success": true,
"transaction_hash": "bd875f00f63bcb718bb22782c88c3edcbed79663f2f9152deab328c48746f103",
"value_hash": "05c302430f4ed0a24470abf9df7e264d517fd389"
}
```
2. Advance the test framework blockchain, so the indexer knows which import keys to expect:
```bash
# NOTE: you can also do this by going to http://localhost:3001 in your Web browser
|blockstack-test namespace_check| $ curl -X POST http://localhost:3001/nextblock
```
3. Make import keys:
```bash
|blockstack-test namespace_check| $ blockstack make_import_keys example hello "$CREATOR_PKEY"
aeda50305ada40aaf53f2d8921aa717f1ec71a0a3b9b4c6397b3877f6d45c46501 (n4DVTuLLv5J1Yc17AoRYY1GtxDAuLGAESr)
92ff179901819a1ec7d32997ce3bb0d9a913895d5850cc05146722847128549201 (mib2KNBGR4az8GiUmusBZexVBqb9YB2gm5)
cc5b6a454e2b614bfa18f4deb9a8e179ab985634d63b7fedfaa59573472d209b01 (mxE2iqV4jdpn4K349Gy424TvZp6MPqSXve)
9b0265e0ac8c3c24fe1d79a734b3661ec2b5c0c2619bb6342356572b8235910101 (n4rGz8hkXTscUGWCwZvahrkEh6LHZVQUoa)
e2585af250404b7918cf6c91c6fa67f3401c0d1ae66df2fafa8fa132f4b9350f01 (moGNpEpighqc6FnkqyNVJA9xtfTiStr5YU)
{
"status": true
}
```
4. Fill up one of the addresses in the test framework, so we can fund `NAME_IMPORT` transactions with it:
```bash
# NOTE: you can also do this by going to http://localhost:3001 in your Web browser
|blockstack-test namespace_check| $ curl -X POST -F 'addr=n4DVTuLLv5J1Yc17AoRYY1GtxDAuLGAESr' -F 'value=100000000' 'http://localhost:3001/sendfunds'
```
5. Import another name, with the child private key we just funded:
```bash
|blockstack-test namespace_check| $ cat > /tmp/example.hello.zonefile <<'EOF'
> $ORIGIN example2.hello
> $TTL 3600
> _file URI 10 1 "file:///home/blockstack-test/example2.hello"
> EOF
|blockstack-test namespace_check| $ blockstack name_import example2.hello n3sFkNfBQPWS25G12DqDEqHRPiqHotAkEb /tmp/example.hello.zonefile aeda50305ada40aaf53f2d8921aa717f1ec71a0a3b9b4c6397b3877f6d45c46501
Import cost breakdown:
{
"name_import_tx_fee": {
"btc": 0.0003342,
"satoshis": 33420
},
"total_estimated_cost": {
"btc": 0.0003342,
"satoshis": 33420
},
"total_tx_fees": 33420
}
Importing name 'example2.hello' to be owned by 'n3sFkNfBQPWS25G12DqDEqHRPiqHotAkEb' with zone file hash '0649bc0b457f54c564d054ce20dc3745a0c4f0c0'
Proceed? (y/N) y
{
"status": true,
"success": true,
"transaction_hash": "496a6c2aaccedd98a8403c2e61ff3bdeff221a58bf0e9c362fcae981353f459f",
"value_hash": "0649bc0b457f54c564d054ce20dc3745a0c4f0c0"
}
```
6. Advance the blockchain again:
```bash
# NOTE: you can also do this by going to http://localhost:3001 in your Web browser
|blockstack-test namespace_check| $ curl -X POST http://localhost:3001/nextblock
```
7. See that the names are processing:
```bash
|blockstack-test namespace_check| $ blockstack info
{
"cli_version": "0.17.0.8",
"consensus_hash": "2a055beeaedcaa1365ab2671a0254a03",
"last_block_processed": 711,
"last_block_seen": 711,
"queues": {
"name_import": [
{
"confirmations": 2,
"name": "example.hello",
"tx_hash": "bd875f00f63bcb718bb22782c88c3edcbed79663f2f9152deab328c48746f103",
},
{
"confirmations": 1,
"name": "example2.hello",
"tx_hash": "496a6c2aaccedd98a8403c2e61ff3bdeff221a58bf0e9c362fcae981353f459f"
}
]
},
"server_alive": true,
"server_host": "localhost",
"server_port": 16264,
"server_version": "0.17.0.8"
}
```
8. Confirm all the transactions:
```bash
# NOTE: you can also do this by going to http://localhost:3001 in your Web browser
|blockstack-test namespace_check| $ for i in $(seq 1 10); do curl -X POST http://localhost:3001/nextblock; done
```
9. Look up name zone files to confirm they were replicated to the test framework's Atlas network:
```bash
|blockstack-test namespace_check| $ blockstack info
{
"cli_version": "0.17.0.8",
"consensus_hash": "ad247c1d5ff239a65db0736951078f17",
"last_block_processed": 721,
"last_block_seen": 721,
"queues": {},
"server_alive": true,
"server_host": "localhost",
"server_port": 16264,
"server_version": "0.17.0.8"
}
|blockstack-test namespace_check| $ blockstack get_name_zonefile example.hello
$ORIGIN example.hello
$TTL 3600
_file URI 10 1 "file:///home/blockstack-test/example.hello"
|blockstack-test namespace_check| $ blockstack get_name_zonefile example2.hello
$ORIGIN example2.hello
$TTL 3600
_file URI 10 1 "file:///home/blockstack-test/example2.hello"
```
These names are now imported. Once the `NAMESPACE_READY` transaction is
sent, the name owners can proceed to issue name operations.
**Warnings**
* The first private key you use must be the same one you used to *create* the namespace (`$CREATOR_PKEY`).
* You may only use the 300 private keys described above to import names.
* You must complete all `NAME_IMPORT` transactions within 52595 blocks of the `NAMESPACE_REVEAL` transaction (about 1 year).
### Launching a Namespace
Once you have pre-populated your namespace with all of the initial names, you
have to make it `ready` so anyone can register a name. If you do not do this
within 1 year of the `reveal` transaction, then your namespace and all of the
names will disappear, and someone else will be able to register it.
To make a namespace `ready`, you use the creator private key as follows:
```bash
|blockstack-test namespace_check| $ blockstack namespace_ready hello "$CREATOR_PKEY"
```
**Warnings**
* You must send the `NAMESPACE_READY` transaction within 52595 blocks (about 1 year) of the `NAMESPACE_REVEAL` transaction.

1
core/basic_usage.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

818
core/blockstack-did-spec.md

@ -1,818 +0,0 @@
# Blockstack DID Method Specification
# Abstract
Blockstack is a network for decentralized applications where users own their
identities and data. Blockstack utilizes a public blockchain to implement a
decentralized [naming layer](https://docs.blockstack.org/core/naming/introduction.html),
which binds a user's human-readable username to their current public key and a pointer to
their data storage buckets. The naming layer ensures that names are globally
unique, that names can be arbitrary human-meaningful strings, and that names
are owned and controlled by cryptographic key pairs such that only the owner of the private key
can update the name's associated state.
The naming layer implements DIDs as a mapping
between the initial name operation for a user's name and the name's current
public key. The storage pointers in the naming layer are leveraged to point to
the authoritative replica of the user's DID document.
# Status of This Document
This document is not a W3C Standard nor is it on the W3C Standards Track. This is
a draft document and may be updated, replaced or obsoleted by other documents at
any time. It is inappropriate to cite this document as other than work in
progress.
Comments regarding this document are welcome. Please file issues directly on
[Github](https://github.com/blockstack/blockstack-core/blob/master/docs/did-spec.md).
# 1. System Overview
Blockstack's DID method is specified as part of its decentralized naming system.
Each name in Blockstack has one or more corresponding DIDs, and each Blockstack
DID corresponds to exactly one name -- even if the name was revoked by its
owner, expired, or was re-registered to a different owner.
Blockstack is unique among decentralized identity systems in that it is *not*
anchored to a specific blockchain or DLT implementation. The system is designed
from the ground up to be portable, and has already been live-migrated from the
Namecoin blockchain to the Bitcoin blockchain. The operational ethos of
Blockstack is to leverage the most secure blockchain at all times -- that is,
the one that is considered hardest to attack.
Blockstack's naming system and its DIDs transcend the underlying blockchain, and
will continue to resolve to DID document objects (DDOs) even if the system
migrates to a new blockchain in the future.
## 1.1 DID Lifecycle
Understanding how Blockstack DIDs operate requires understanding how Blockstack
names operate. Fundamentally, a Blockstack DID is defined as a pointer to the
*nth name registered by an address.* How this information is determined depends
on the category of name being registered -- a DID can be derived from an
*on-chain* name or an *off-chain* name. We call these DIDs *on-chain DIDs* and
*off-chain DIDs*, respectively.
### 1.1.1 On-Chain DIDs
On-chain names are written directly to Blockstack's underlying blockchain.
Instantiating an on-chain name requires two transactions -- a `NAME_PREORDER`
transaction, and a `NAME_REGISTRATION` transaction. Upon successful
confirmation of the `NAME_REGISTRATION` transaction, the system assigns the name to
an on-chain address indicated by the `NAME_REGISTRATION` transaction itself.
This address is the name's *owner*.
Since these transactions are written to the blockchain, the blockchain provides
a total ordering of name-to-address assignments. Thus, a DID instantiated for an
on-chain name may be referenced by the name's owner address, and the number of
names ever assigned to the owner address *at the time of this DID's
instantiation*. For example, the DID
`did:stack:v0:15gxXgJyT5tM5A4Cbx99nwccynHYsBouzr-3` was instantiated when the
fourth on-chain name was created and initially assigned to the address `15gxXgJyT5tM5A4Cbx99nwccynHYsBouzr`
(note that the index parameter -- the `-3` -- starts counting from 0).
### 1.1.2 Off-chain DIDs
Off-chain names, sometimes called *subdomains* in the Blockstack literature,
refer to names whose transaction histories are instantiated and stored outside
of Blockstack's blockchain within Blockstack's Atlas peer network. Off-chain
name transactions are encoded in batches, where each batch is hashed and written
to the underlying blockchain through a transaction for an on-chain name. This
provides them with the same safety properties as on-chain names -- off-chain
names are globally unique, off-chain names can be arbitrary human-meaningful
strings, off-chain names are owned by cryptographic key pairs, and all
Blockstack nodes see the same linearized history of off-chain name operations.
Off-chain names are instantiated by an on-chain name, indicated by the off-chain
name's suffix. For example, `cicero.res_publica.id` is an off-chain name
whose initial transaction history is processed by the owner of the on-chain
name `res_publica.id`. Note that the owner of `res_publica.id` does *not* own
`cicero.res_publica.id`, and cannot issue well-formed name updates to it.
Off-chain names -- and by extension, their corresponding DIDs -- have
different liveness properties than on-chain names. The Blockstack
naming system protocol requires the owner of `res_publica.id` to propagate the
signed transactions that instantiate and transfer ownership of
`cicero.res_publica.id`. However, *any* on-chain name can process a name update
for an off-chain name -- that is, an update that changes where the name's
associated state resides. For details as to why this is the case, please refer
to the [Blockstack subdomain documentation](https://docs.blockstack.org/core/naming/subdomains.html).
An off-chain DID is similarly structured to an on-chain DID. Like on-chain
names, each off-chain name is owned by an address (but not necessarily an
address on the blockchain), and each Blockstack node sees the same sequence of
off-chain name-to-address assignments. Thus, it has enough information to
assign each off-chain name a DID.
# 2. Blockstack DID Method
The namestring that shall identify this DID method is: `stack`
A DID that uses this method *MUST* begin with the following literal prefix: `did:stack`.
The remainder of the DID is its namespace-specific identifier.
# 2.1 Namespace-Specific Identifier
The namespace-specific identifier (NSI) of the Blockstack DID encodes two pieces
of information: an address, and an index.
The **address** shall be a base58check encoding of a version byte concatenated with
the RIPEMD160 hash of a SHA256 hash of a DER-encoded secp256k1 public key.
For example, in this Python 2 snippet:
```python
import hashlib
import base58
pubkey = '042bc8aa4eb54d779c1fb8a2d5022aec8ed7fc2cc34d57356d9e1c417ce416773f45b0299ea7be347d14c69c403d9a03c8ec0ccf47533b4bee8cd002e5de81f945'
sha256_pubkey = hashlib.sha256(pubkey.decode('hex')).hexdigest()
# '18328b13b4df87cbcd190c083ef1d74487fc1383792f208f52c596b4588fb665'
ripemd160_sha256_pubkey = hashlib.new('ripemd160', sha256_pubkey.decode('hex')).hexdigest()
# '1651c1a6001d4750e46be8a02cc19550d4309b71'
version_byte = '\x00'
address = base58.b58encode_check(version_byte + ripemd160_sha256_pubkey.decode('hex'))
# '1331okvQ3Jr2efzaJE42Supevzfzg8ahYW'
```
The **index** shall be a non-negative monotonically-increasing integer.
The (address, index) pair uniquely identifies a DID. Blockstack's naming system
ensures that the index increases monotonically: it is incremented each time a
name is registered to the address, i.e. each time a DID is instantiated.
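The NSI can be assembled and taken apart mechanically. A small sketch (the helper names are illustrative, not part of any Blockstack library; the sample DID is the one from section 1.1.1):

```python
def make_did(address, index):
    """Build a Blockstack DID namestring from an (address, index) pair."""
    return 'did:stack:v0:{}-{}'.format(address, index)

def parse_did(did):
    """Split a Blockstack DID back into its (address, index) pair."""
    prefix, nsi = did.rsplit(':', 1)
    assert prefix == 'did:stack:v0', 'not a Blockstack v0 DID'
    address, index = nsi.rsplit('-', 1)
    return address, int(index)

print(parse_did('did:stack:v0:15gxXgJyT5tM5A4Cbx99nwccynHYsBouzr-3'))
```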
## 2.2 Address Encodings
The address's version byte encodes whether or not a DID corresponds to an
on-chain name transaction or an off-chain name transaction, and whether or not
it corresponds to a mainnet or testnet address. The version bytes for each
configuration shall be as follows:
* On-chain names on mainnet: `0x00`
* On-chain names on testnet: `0x6f`
* Off-chain names on mainnet: `0x3f`
* Off-chain names on testnet: `0x7f`
For example, the RIPEMD160 hash `1651c1a6001d4750e46be8a02cc19550d4309b71` would
encode to the following base58check strings:
* On-chain mainnet: `1331okvQ3Jr2efzaJE42Supevzfzg8ahYW`
* On-chain testnet: `mhYy6p1NrLHHRnUC1o2QGq2ynzGhduVoEX`
* Off-chain mainnet: `SPL1qbhYmg3EAyn2qf36zoyDamuRXm2Gjk`
* Off-chain testnet: `t8xcrYmzDDhJWihaQWMW2qPZs4Po1PfvCB`
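These four encodings can be reproduced with a plain base58check routine: prepend the version byte, append the first four bytes of a double-SHA256 checksum, and base58-encode. A self-contained sketch (standard Bitcoin alphabet; `b58check_encode` is a local helper, not a library call):

```python
import hashlib

ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58check_encode(version_byte, payload):
    """base58check: version byte + payload + 4-byte double-SHA256 checksum."""
    data = version_byte + payload
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    data += checksum
    n = int.from_bytes(data, 'big')
    out = ''
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # each leading zero byte encodes as a leading '1'
    pad = len(data) - len(data.lstrip(b'\x00'))
    return '1' * pad + out

h = bytes.fromhex('1651c1a6001d4750e46be8a02cc19550d4309b71')
for version in (b'\x00', b'\x6f', b'\x3f', b'\x7f'):
    # prints the four encodings listed above, in the same order
    print(b58check_encode(version, h))
```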
# 3. Blockstack DID Operations
## 3.1 Creating a Blockstack DID
Creating a Blockstack DID requires registering a name -- be it on-chain or
off-chain. To register an on-chain name, the user must send two transactions to
Blockstack's underlying blockchain (currently Bitcoin) that implement the
`NAME_PREORDER` and `NAME_REGISTRATION` commands. Details on the wire formats
for these transactions can be found in Appendix A. Blockstack supplies both a
[graphical tool](https://github.com/blockstack/blockstack-browser) and a
[command-line tool](https://github.com/blockstack/cli-blockstack) for
generating and broadcasting these transactions, as well as a
[reference library](https://github.com/blockstack/blockstack.js).
To register an off-chain name, the user must be able to submit a request to an
off-chain registrar. Anyone with an on-chain name can use it to operate a
registrar for off-chain names. A reference implementation can be found
[here](https://github.com/blockstack/subdomain-registrar).
To register an off-chain DID, the user
must submit a JSON body as an HTTP POST request to the registrar's
registration endpoint with the following format:
```
{
"zonefile": "<zonefile encoding the location of the DDO>",
"name": "<off-chain name, excluding the on-chain suffix>",
"owner_address": "<b58check-encoded address that will own the name, with version byte \x00>",
}
```
For example, to register the name `spqr` on a registrar for `res_publica.id`:
```bash
$ curl -X POST -H 'Authorization: bearer API-KEY-IF-USED' -H 'Content-Type: application/json' \
> --data '{"zonefile": "$ORIGIN spqr\n$TTL 3600\n_https._tcp URI 10 1 \"https://gaia.blockstack.org/hub/1HgW81v6MxGD76UwNbHXBi6Zre2fK8TwNi/profile.json\"\n", \
> "name": "spqr", \
> "owner_address": "1HgW81v6MxGD76UwNbHXBi6Zre2fK8TwNi"}' \
> http://localhost:3000/register/
```
The `zonefile` field must be a well-formed DNS zonefile, and must have the
following properties:
* It must have its `$ORIGIN` field set to the off-chain name.
* It must have at least one `URI` resource record that encodes an HTTP or
HTTPS URL. Note that its name must be either `_http._tcp` or `_https._tcp`, per the
`URI` record specification.
* The HTTP or HTTPS URL must resolve to a JSON Web token signed by a secp256k1 public key
that hashes to the `owner_address` field, per section 2.1.
Once the request is accepted, the registrar will issue a subsequent `NAME_UPDATE`
transaction for its on-chain name and broadcast the batch of off-chain zone
files it has accumulated to the Blockstack Atlas network (see Appendix A). The batch
of off-chain names' zone files will be hashed, and the hash will be written to
the blockchain as part of the `NAME_UPDATE`. This proves the existence of these
off-chain names, as well as their corresponding DIDs.
Once the transaction confirms and the off-chain zone files are propagated to the
peer network, any Blockstack node will be able to resolve the off-chain name's associated DID.
## 3.2 Storing a Blockstack DID's DDO
Each name in Blockstack, and by extension, each DID, must have one or more
associated URLs. To resolve a DID (section 3.3), the DID's URLs must point to
a well-formed signed DDO. It is up to the DID owner to sign and upload the DDO
to the relevant location(s) so that DID resolution works as expected, and it is
up to the DID owner to ensure that the DDO is well-formed. Resolvers should
validate DDOs before returning them to clients.
In order for a DID to resolve to a DDO, the DDO must be encoded as a JSON web
token, and must be signed by the secp256k1 private key whose public key hashes
to the DID's address. This is used by the DID resolver to authenticate the DDO,
thereby removing the need to trust the server(s) hosting the DDO with replying
authentic data.
## 3.3 Resolving a Blockstack DID
Any Blockstack node with an up-to-date view of the underlying blockchain and a
complete set of off-chain zone files can translate any name into its DID, and
translate any DID into its name.
Since DID registration in Blockstack is achieved by first registering a name,
the user must first determine the DID's NSI. To do so, the user simply requests
it from a Blockstack node of their choice as a GET request to the node's
`/v1/dids/{:blockstack_did}` endpoint. The response must be a JSON object with
a `public_key` field containing the secp256k1 public key that hashes to the
DID's address, and a `document` field containing the DDO. The DDO's `publicKey` field
shall be an array of objects with one element, where the
only element describes the `public_key` given in the top-level object.
For example:
```bash
$ curl -s https://core.blockstack.org/v1/dids/did:stack:v0:15gxXgJyT5tM5A4Cbx99nwccynHYsBouzr-0 | jq
{
  "public_key": "022af593b4449b37899b34244448726aa30e9de13c518f6184a29df40823d82840",
  "document": {
    ...
    "@context": "https://w3id.org/did/v1",
    "publicKey": [
      {
        "id": "did:stack:v0:15gxXgJyT5tM5A4Cbx99nwccynHYsBouzr-0",
        "type": "secp256k1",
        "publicKeyHex": "022af593b4449b37899b34244448726aa30e9de13c518f6184a29df40823d82840"
      }
    ],
    ...
  }
}
```
## 3.4 Updating a Blockstack DID
The user can change their DDO at any time by uploading a new signed DDO to the
relevant locations, per section 3.2, *except for* the `publicKey` field. In
order to change the DID's public key, the user must transfer the underlying name
to a new address.
If the DID corresponds to an on-chain name, then the user must send a
`NAME_TRANSFER` transaction to send the name to the new address. Once the
transaction is confirmed by the Blockstack network, the DID's public key will be
updated. See Appendix A for the `NAME_TRANSFER` wire format. Blockstack
provides a [reference library](https://github.com/blockstack/blockstack.js) for
generating this transaction.
### 3.4.1 Off-Chain DID Updates
If the DID corresponds to an off-chain name, then the user must request that the
registrar that instantiated the name to broadcast an off-chain name transfer
operation. To do so, the user must submit a string with the following format to
the registrar:
```
${name} TXT "owner=${new_address}" "seqn=${update_counter}" "parts=${length_of_zonefile_base64}" "zf0=${base64_part_0}" "zf1=${base64_part_1}" ... "sig=${base64_signature}"
```
The string is a well-formed DNS TXT record with the following fields:
* The `${name}` field is the subdomain name without the on-chain suffix (e.g.
`spqr` in `spqr.res_publica.id`).
* The `${new_address}` field is the new owner address of the subdomain name.
* The `${update_counter}` field is a non-negative integer equal to the number of
subdomain name operations that have occurred so far. It starts with 0 when
the name is created, and must increment each time the name owner issues an
off-chain name operation.
* The `${length_of_zonefile_base64}` field is the number of 256-byte segments
  that make up the base64-encoded zone file (e.g. `parts=1` for a zone file
  whose base64 encoding fits in a single segment).
* The fields `zf0`, `zf1`, `zf2`, etc. and their corresponding variables
`${base64_part_0}`, `${base64_part_1}`, `${base64_part_2}`, etc. correspond to
256-byte segments of the base64-encoded zone file. They must occur in a
sequence of `zf${n}` where `${n}` starts at 0 and increments by 1 until all
segments of the zone file are represented.
* The `${base64_signature}` field is a secp256k1 signature over the resulting
string, up to the `sig=` field, and base64-encoded. The signature must come
from the secp256k1 private key that currently owns the name.
Thus to generate this TXT record for their DID, the user would do the following:
1. Base64-encode the off-chain DID's zone file.
2. Break the base64-encoded zone file into 256-byte segments.
3. Assemble the TXT record from the name, new address, update counter, and zone
file segments.
4. Sign the resulting string with the DID's current private key.
5. Generate and append the `sig=${base64_signature}` field to the TXT record.
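Steps 1 through 3 can be sketched with the standard library alone. This is a minimal illustration, not the reference implementation: the secp256k1 signature of steps 4 and 5 is deliberately left out, and `make_unsigned_txt` plus its sample zone file are hypothetical names:

```python
import base64

def make_unsigned_txt(name, new_address, seqn, zonefile):
    """Assemble the unsigned portion of an off-chain name-transfer TXT record."""
    zf_b64 = base64.b64encode(zonefile.encode()).decode()
    # step 2: break the base64 zone file into 256-byte segments
    segments = [zf_b64[i:i + 256] for i in range(0, len(zf_b64), 256)]
    # step 3: assemble owner, seqn, parts, and zf${n} fields in order
    fields = ['owner={}'.format(new_address), 'seqn={}'.format(seqn),
              'parts={}'.format(len(segments))]
    fields += ['zf{}={}'.format(i, seg) for i, seg in enumerate(segments)]
    # steps 4-5 (omitted): sign this whole string with the current owner
    # key and append a final "sig=<base64 signature>" field
    return '{} TXT {}'.format(name, ' '.join('"%s"' % f for f in fields))

zonefile = ('$ORIGIN bar\n$TTL 3600\n_http._tcp IN URI 10 1 '
            '"https://gaia.blockstack.org/hub/1Jq3x8BAYz9Xy9AMfur5PXkDsWtmBBsNnC/profile.json"\n\n')
print(make_unsigned_txt('bar', '1Jq3x8BAYz9Xy9AMfur5PXkDsWtmBBsNnC', 1, zonefile))
```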
Sample code to generate these TXT records can be found in the [Stacks Node
reference implementation](https://github.com/blockstack/blockstack-core), under
the `blockstack.lib.subdomains` package. For example, the Python 2 program here
generates such a TXT record:
```python
import blockstack
offchain_name = 'bar'
onchain_name = 'foo.test'
new_address = '1Jq3x8BAYz9Xy9AMfur5PXkDsWtmBBsNnC'
seqn = 1
privk = 'da1182302fee950e64241a4103646992b1bed7f6c4ced858282e493d57df33a501'
full_name = '{}.{}'.format(offchain_name, onchain_name)
zonefile = "$ORIGIN {}\n$TTL 3600\n_http._tcp\tIN\tURI\t10\t1\t\"https://gaia.blockstack.org/hub/{}/profile.json\"\n\n".format(offchain_name, new_address)
print blockstack.lib.subdomains.make_subdomain_txt(full_name, onchain_name, new_address, seqn, zonefile, privk)
```
The program prints a string such as:
```
bar TXT "owner=1Jq3x8BAYz9Xy9AMfur5PXkDsWtmBBsNnC" "seqn=1" "parts=1" "zf0=JE9SSUdJTiBiYXIKJFRUTCAzNjAwCl9odHRwLl90Y3AJSU4JVVJJCTEwCTEJImh0dHBzOi8vZ2FpYS5ibG9ja3N0YWNrLm9yZy9odWIvMUpxM3g4QkFZejlYeTlBTWZ1cjVQWGtEc1d0bUJCc05uQy9wcm9maWxlLmpzb24iCgo\=" "sig=QEA+88Nh6pqkXI9x3UhjIepiWEOsnO+u1bOBgqy+YyjrYIEfbYc2Q8YUY2n8sIQUPEO2wRC39bHQHAw+amxzJfkhAxcC/fZ0kYIoRlh2xPLnYkLsa5k2fCtXqkJAtsAttt/V"
```
(Note that the `sig=` field will differ between invocations, due to the way
ECDSA signatures work).
Once this TXT record has been submitted to the name's original registrar, the
registrar will pack it along with other such records into a single zone file,
and issue a `NAME_UPDATE` transaction for the on-chain name to announce them to
the rest of the peer network. The registrar will then propagate these TXT
records to the peer network once the transaction confirms, thereby informing all
Blockstack nodes of the new state of the off-chain DID.
### 3.4.2 Changing the Storage Locations of a DDO
If the user wants to change where the resolver will look for a DDO, they must do
one of two things. If the DID corresponds to an on-chain name, then the user
must send a `NAME_UPDATE` transaction for the underlying name, whose 20-byte
hash field is the RIPEMD160 hash of the name's new zone file. See Appendix A
for the wire format of `NAME_UPDATE` transactions.
If the DID corresponds to an off-chain name, then the user must submit a request
to an off-chain name registrar to propagate a new zone file for the name.
Unlike changing the public key, the user can ask *any* off-chain registrar to
broadcast a new zone file. The method for doing this is described in section
3.4.1 -- the user simply changes the zone file contents instead of the address.
# 4. Deleting a Blockstack DID
If the user wants to delete their DID, they can do so by revoking the underlying
name. To do this with an on-chain name, the user constructs and broadcasts a
`NAME_REVOKE` transaction. Once confirmed, the DID will stop resolving.
To do this with an off-chain name, the user constructs and broadcasts a TXT
record for their DID's underlying name that (1) changes the owner address to a
"nothing-up-my-sleeve" address (such as `1111111111111111111114oLvT2` -- the
base58-check encoding of 20 bytes of 0's), and (2) changes the zone file to
include an unresolvable URL. This prevents the DID from resolving, and prevents
it from being updated.
# 5. Security Considerations
This section briefly outlines possible ways to attack Blockstack's DID method,
as well as countermeasures the Blockstack protocol and the user can take to
defend against them.
## 5.1 Public Blockchain Attacks
Blockstack operates on top of a public blockchain, which could be attacked by a
sufficiently powerful adversary -- such as rolling back and changing the chain's
transaction history, denying new transactions for Blockstack's name
operations, or eclipsing nodes.
Blockstack makes the first two attacks difficult by operating on top of the most
secure blockchain -- currently Bitcoin. If the blockchain is attacked, or a
stronger blockchain comes into being, the Blockstack community would migrate the
Blockstack network to a new blockchain.
The underlying blockchain provides some immunity towards eclipse attacks, since a
blockchain peer expects blocks to arrive at roughly fixed intervals and expects
blocks to have a proof of an expenditure of an expensive resource (like
electricity). In Bitcoin's case, the computational difficulty of finding new blocks puts a
high lower bound on the computational effort required to eclipse a Bitcoin node --
in order to sustain 10-minute block times, the attacker must expend an equal
amount of energy as the rest of the network. Moreover, the required expenditure
rate (the "chain difficulty") decreases slowly enough that an attacker with less
energy would have to spend months of time on the attack, giving the victim
ample time to detect it. The countermeasures the blockchain employs to deter
eclipse attacks are beyond the scope of this document, but it is worth pointing
out that Blockstack's DID method benefits from them since they also help ensure
that DID creation, updates and deletions get processed in a timely manner.
## 5.2 Blockstack Peer Network Attacks
Because Blockstack stores each DID's DDO's URL in its own peer network outside
of its underlying blockchain, it is possible to eclipse Blockstack nodes and
prevent them from seeing both off-chain DID operations and updates to on-chain
DIDs. In an effort to make this as difficult as possible, the
Blockstack peer network implements an unstructured overlay network -- nodes select
a random sample of the peer graph as their neighbors. Moreover, Blockstack
nodes strive to fetch a full replica of all zone files, and pull zone files from
their neighbors in rarest-first order to prevent zone files from getting lost
while they are propagating. This makes eclipsing a node
maximally difficult -- an attacker would need to disrupt all of the victim
node's neighbor links.
In addition to this protocol-level countermeasure, a user has the option of
uploading zone files manually to their preferred Blockstack nodes. If
vigilant users have access to a replica of the zone files, they can re-seed
Blockstack nodes that do not have them.
## 5.3 Stale Data and Replay Attacks
A DID's DDO is stored on a 3rd party storage provider. The DDO's public key is
anchored to the blockchain, which means each time the DDO public key changes,
all previous DDOs are invalidated. Similarly, the DDO's storage provider URLs
are anchored to the blockchain, which means each time the DID's zone file
changes, any stale DDOs will no longer be fetched. However, if the user changes
other fields of their DDO, a malicious storage provider or a network adversary
can serve a stale but otherwise valid DDO and the resolver will accept it.
The user has a choice of which storage providers host their DDO. If the storage
provider serves stale data, the user can and should change their storage
provider to one that will serve only fresh data. In addition, the user should
use secure transport protocols like HTTPS to make replay attacks on the network
difficult. For use cases where these are not sufficient to prevent replay
attacks, the user should change their zone file and/or public key each time they
change their DDO.
# 6. Privacy Considerations
Blockstack's DIDs are underpinned by Blockstack IDs (human readable
names), and every Blockstack node records where every DID's DDO is
hosted. However, users have the option of encrypting their DDOs so that only a
select set of other users can decrypt them.
Blockstack's peer network and DID resolver use HTTP(S), meaning that
intermediate middleboxes like CDNs and firewalls can cache data and log
requests.
# 7. Reference Implementations
Blockstack implements a [RESTful API](https://core.blockstack.org) for querying
DIDs. It also implements a [reference
library](https://github.com/blockstack/blockstack.js) for generating well-formed
on-chain transactions, and it implements a [Python
library](https://github.com/blockstack/blockstack/core/blob/master/blockstack/lib/subdomains.py)
for generating off-chain DID operations. The Blockstack node [reference
implementation](https://github.com/blockstack/blockstack-core) is available
under the terms of the GNU General Public License, version 3.
# 8. Resources
Many Blockstack developers communicate via the [Blockstack
Forum](https://forum.blockstack.org) and via the [Blockstack
Slack](https://blockstack.slack.com). Interested developers are encouraged to
join both.
# Appendix A: On-chain Wire Formats
This section is for organizations that want to create and send name operation
transactions to the blockchain(s) Blockstack supports.
It describes the transaction formats for the Bitcoin blockchain.
Only the transactions that affect DID creation, updates, resolution, and
deletions are documented here. A full listing of all Blockstack transaction
formats can be found
[here](https://github.com/blockstack/blockstack-core/blob/master/docs/wire-format.md).
## Transaction format
Each Bitcoin transaction for Blockstack contains signatures from two sets of keys: the name owner, and the payer. The owner `scriptSig` and `scriptPubKey` fields are generated from the key(s) that own the given name. The payer `scriptSig` and `scriptPubKey` fields are used to *subsidize* the operation. The owner keys do not pay for any operations; the owner keys only control the minimum amount of BTC required to make the transaction standard. The payer keys only pay for the transaction's fees, and (when required) they pay the name fee.
This construction is meant to allow the payer to be wholly separate from the owner. The principal that owns the name can fund their own transactions, or they can create a signed transaction that carries out the desired operation and request some other principal (e.g. a parent organization) to actually pay for and broadcast the transaction.
The general transaction layout is as follows:
| **Inputs** | **Outputs** |
| ------------------------ | ----------------------- |
| Owner scriptSig (1) | `OP_RETURN <payload>` (2) |
| Payment scriptSig | Owner scriptPubKey (3) |
| Payment scriptSig... (4) | ... (5)                 |
(1) The owner `scriptSig` is *always* the first input.
(2) The `OP_RETURN` script that describes the name operation is *always* the first output.
(3) The owner `scriptPubKey` is *always* the second output.
(4) The payer can use as many payment inputs as (s)he likes.
(5) At most one output will be the "change" `scriptPubKey` for the payer.
Different operations require different outputs.
## Payload Format
Each Blockstack transaction in Bitcoin describes the name operation within an `OP_RETURN` output. It encodes name ownership, name fees, and payments as `scriptPubKey` outputs. The specific operations are described below.
Each `OP_RETURN` payload *always* starts with the two-byte string `id` (called the "magic" bytes in this document), followed by a one-byte `op` that describes the operation.
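As a sketch of how these pieces fit together (the function and constant names here are illustrative, not from the reference implementation), the common payload prefix can be assembled as:

```python
# Sketch: build the common OP_RETURN payload prefix for a Blockstack
# name operation. "id" is the two-byte magic string; `op` is the
# one-byte operation code (e.g. b'?' for NAME_PREORDER, b':' for
# NAME_REGISTRATION). Names below are illustrative only.
MAGIC = b'id'

def payload_prefix(op: bytes) -> bytes:
    assert len(op) == 1, "op must be exactly one byte"
    return MAGIC + op

# First 3 bytes of a NAME_PREORDER payload
preorder_prefix = payload_prefix(b'?')
```

The operation-specific fields documented below are appended after this 3-byte prefix.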
### NAME_PREORDER
Op: `?`
Description: This transaction commits to the *hash* of a name. It is the first
transaction of two transactions that must be sent to register a name in BNS.
Example: [6730ae09574d5935ffabe3dd63a9341ea54fafae62fde36c27738e9ee9c4e889](https://www.blocktrail.com/BTC/tx/6730ae09574d5935ffabe3dd63a9341ea54fafae62fde36c27738e9ee9c4e889)
`OP_RETURN` wire format:
```
0 2 3 23 39
|-----|--|--------------------------------------------------|--------------|
magic op hash_name(name.ns_id,script_pubkey,register_addr) consensus hash
```
Inputs:
* Payment `scriptSig`'s
Outputs:
* `OP_RETURN` payload
* Payment `scriptPubkey` script for change
* `p2pkh` `scriptPubkey` to the burn address (0x00000000000000000000000000000000000000)
Notes:
* `register_addr` is a base58check-encoded `ripemd160(sha256(pubkey))` (i.e. an address). This address **must not** have been used before in the underlying blockchain.
* `script_pubkey` is either a `p2pkh` or `p2sh` compiled Bitcoin script for the payer's address.
### NAME_REGISTRATION
Op: `:`
Description: This transaction reveals the name whose hash was announced by a
previous `NAME_PREORDER`. It is the second of two transactions that must be
sent to register a name in BNS.
When this transaction confirms, the corresponding Blockstack DID will be
instantiated. Its address will be the owner address in this transaction, and
its index will be equal to the number of names registered to this address previously.
Example: [55b8b42fc3e3d23cbc0f07d38edae6a451dfc512b770fd7903725f9e465b2925](https://www.blocktrail.com/BTC/tx/55b8b42fc3e3d23cbc0f07d38edae6a451dfc512b770fd7903725f9e465b2925)
`OP_RETURN` wire format (2 variations allowed):
Variation 1:
```
0 2 3 39
|----|--|-----------------------------|
magic op name.ns_id (37 bytes)
```
Variation 2:
```
0 2 3 39 59
|----|--|----------------------------------|-------------------|
magic op name.ns_id (37 bytes, 0-padded) value
```
Inputs:
* Payer `scriptSig`'s
Outputs:
* `OP_RETURN` payload
* `scriptPubkey` for the owner's address
* `scriptPubkey` for the payer's change
Notes:
* Variation 1 simply registers the name. Variation 2 will register the name and
set a name value simultaneously. This is used in practice to set a zone file
hash for a name without the extra `NAME_UPDATE` transaction.
* Both variations are supported. Variation 1 was designed for the time when
Bitcoin only supported 40-byte `OP_RETURN` outputs.
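The two variations can be sketched as follows (function and variable names are my own; taking the "37 bytes" label at face value, the padded variation totals 3 + 37 + 20 = 60 bytes):

```python
# Sketch: the two NAME_REGISTRATION payload variations. The
# fully-qualified name occupies up to 37 bytes; variation 2 zero-pads
# it to 37 bytes and appends a 20-byte value (in practice, a zone file
# hash). The names and placeholder value below are illustrative only.
MAGIC = b'id'
OP_NAME_REGISTRATION = b':'

def registration_payload(fqn, value=None):
    name_bytes = fqn.encode('ascii')
    assert len(name_bytes) <= 37
    if value is None:
        # Variation 1: just the name
        return MAGIC + OP_NAME_REGISTRATION + name_bytes
    # Variation 2: 0-padded name plus 20-byte value
    assert len(value) == 20
    padded = name_bytes + b'\x00' * (37 - len(name_bytes))
    return MAGIC + OP_NAME_REGISTRATION + padded + value

assert registration_payload('jude.id') == b'id:jude.id'
assert len(registration_payload('jude.id', b'\x00' * 20)) == 60
```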
### NAME_RENEWAL
Op: `:`
Description: This transaction renews a name in BNS. The name must still be
registered and not expired, and owned by the transaction sender.
Depending on which namespace the name was created in, you may never need to
renew a name. However, in namespaces where names expire (such as `.id`), you
will need to renew your name periodically to continue using its associated DID.
If this is a problem, we recommend creating a name in a namespace without name
expirations, so that `NAME_UPDATE`, `NAME_TRANSFER` and `NAME_REVOKE` -- the operations that
underpin the DID's operations -- will work indefinitely.
Example: [e543211b18e5d29fd3de7c0242cb017115f6a22ad5c6d51cf39e2b87447b7e65](https://www.blocktrail.com/BTC/tx/e543211b18e5d29fd3de7c0242cb017115f6a22ad5c6d51cf39e2b87447b7e65)
`OP_RETURN` wire format (2 variations allowed):
Variation 1:
```
0 2 3 39
|----|--|-----------------------------|
magic op name.ns_id (37 bytes)
```
Variation 2:
```
0 2 3 39 59
|----|--|----------------------------------|-------------------|
magic op name.ns_id (37 bytes, 0-padded) value
```
Inputs:
* Payer `scriptSig`'s
Outputs:
* `OP_RETURN` payload
* `scriptPubkey` for the owner's address. This can be a different address than
the current name owner (in which case, the name is renewed and transferred).
* `scriptPubkey` for the payer's change
* `scriptPubkey` for the burn address (to pay the name cost)
Notes:
* This transaction is identical to a `NAME_REGISTRATION`, except for the presence of the fourth output that pays for the name cost (to the burn address).
* Variation 1 simply renews the name. Variation 2 will both renew the name and
set a new name value (in practice, the hash of a new zone file).
* Both variations are supported. Variation 1 was designed for the time when
Bitcoin only supported 40-byte `OP_RETURN` outputs.
* This operation can be used to transfer a name to a new address by setting the
second output (the first `scriptPubkey`) to be the `scriptPubkey` of the new
owner key.
### NAME_UPDATE
Op: `+`
Description: This transaction sets the name state for a name to the given
`value`. In practice, this is used to announce new DNS zone file hashes to the [Atlas
network](https://docs.blockstack.org/core/atlas/overview.html), and in doing so,
change where the name's off-chain state resides. In DID terminology, this
operation changes where the authoritative replica of the DID's DDO will be
retrieved on the DID's lookup.
Example: [e2029990fa75e9fc642f149dad196ac6b64b9c4a6db254f23a580b7508fc34d7](https://www.blocktrail.com/BTC/tx/e2029990fa75e9fc642f149dad196ac6b64b9c4a6db254f23a580b7508fc34d7)
`OP_RETURN` wire format:
```
0 2 3 19 39
|-----|--|-----------------------------------|-----------------------|
magic op hash128(name.ns_id,consensus hash) zone file hash
```
Note that `hash128(name.ns_id, consensus hash)` is the first 16 bytes of a SHA256 hash over the name concatenated to the hexadecimal string of the consensus hash (not the bytes corresponding to that hex string).
See the [Method Glossary](#method-glossary) below.
Example: `hash128("jude.id" + "8d8762c37d82360b84cf4d87f32f7754") == "d1062edb9ec9c85ad1aca6d37f2f5793"`.
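A minimal `hash128` sketch consistent with the description above (Python 3; the function name mirrors the glossary, the rest is illustrative):

```python
import hashlib

def hash128(data: str) -> str:
    # First 16 bytes of sha256(data), hex-encoded (32 hex characters).
    # The input is the name concatenated with the consensus hash's
    # hexadecimal *string*, not its decoded bytes.
    return hashlib.sha256(data.encode('ascii')).digest()[:16].hex()

digest = hash128("jude.id" + "8d8762c37d82360b84cf4d87f32f7754")
assert len(digest) == 32
```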
The 20-byte zone file hash is computed from the zone file data as `ripemd160(sha256(zone file data))`.
Inputs:
* owner `scriptSig`
* payment `scriptSig`'s
Outputs:
* `OP_RETURN` payload
* owner's `scriptPubkey`
* payment `scriptPubkey` change
### NAME_TRANSFER
Op: `>`
Description: This transaction changes the public key hash that owns the name in
BNS. When the name or its DID is looked up after this transaction confirms, the
resolver will list the new public key as the owner.
Example: [7a0a3bb7d39b89c3638abc369c85b5c028d0a55d7804ba1953ff19b0125f3c24](https://www.blocktrail.com/BTC/tx/7a0a3bb7d39b89c3638abc369c85b5c028d0a55d7804ba1953ff19b0125f3c24)
`OP_RETURN` wire format:
```
0 2 3 4 20 36
|-----|--|----|-------------------|---------------|
magic op keep hash128(name.ns_id) consensus hash
data?
```
Inputs:
* Owner `scriptSig`
* Payment `scriptSig`'s
Outputs:
* `OP_RETURN` payload
* new name owner's `scriptPubkey`
* old name owner's `scriptPubkey`
* payment `scriptPubkey` change
Notes:
* The `keep data?` byte controls whether or not the name's 20-byte value is preserved (i.e. whether or not the name's associated zone file is preserved across the transfer).
This value is either `>` to preserve it, or `~` to delete it. If you're simply
re-keying, you should use `>`. You should only use `~` if you want to
simultaneously dissociate the name (and its DID) from its off-chain state, like
the DID's DDO.
### NAME_REVOKE
Op: `~`
Description: This transaction destroys a registered name. Its name state value
in BNS will be cleared, and no further transactions will be able to affect the
name until it expires (if its namespace allows it to expire at all). Once
confirmed, this transaction ensures that neither the name nor the DID will
resolve to a DDO.
Example: [eb2e84a45cf411e528185a98cd5fb45ed349843a83d39fd4dff2de47adad8c8f](https://www.blocktrail.com/BTC/tx/eb2e84a45cf411e528185a98cd5fb45ed349843a83d39fd4dff2de47adad8c8f)
`OP_RETURN` wire format:
```
0 2 3 39
|----|--|-----------------------------|
magic op name.ns_id (37 bytes)
```
Inputs:
* owner `scriptSig`
* payment `scriptSig`'s
Outputs:
* `OP_RETURN` payload
* owner `scriptPubkey`
* payment `scriptPubkey` change
## Method Glossary
Some hashing primitives are used to construct the wire-format representation of each name operation. They are enumerated here:
```
# Python 3 adaptation of the reference primitives. The helpers that the
# original snippet assumed but did not define (B40_CHARS, B16_CHARS,
# change_charset, bin_sha256, is_hex) are filled in here to match the
# reference implementation's documented behavior.
import re
import hashlib
from binascii import hexlify, unhexlify

B16_CHARS = '0123456789abcdef'
B40_CHARS = '0123456789abcdefghijklmnopqrstuvwxyz-_.+'
B40_REGEX = r'^[a-z0-9\-_.+]*$'

def is_b40(s):
    return isinstance(s, str) and re.match(B40_REGEX, s) is not None

def is_hex(s):
    return isinstance(s, str) and re.match(r'^[0-9a-fA-F]*$', s) is not None

def change_charset(s, original_charset, target_charset):
    # Interpret s as a big-endian integer in the original charset,
    # then re-encode that integer in the target charset.
    n = 0
    for ch in s:
        n = n * len(original_charset) + original_charset.index(ch)
    res = ''
    while n > 0:
        n, r = divmod(n, len(target_charset))
        res = target_charset[r] + res
    return res

def hexpad(x):
    return ('0' * (len(x) % 2)) + x

def charset_to_hex(s, original_charset):
    return hexpad(change_charset(s, original_charset, B16_CHARS))

def b40_to_bin(s):
    if not is_b40(s):
        raise ValueError('{} must only contain characters in the b40 char set'.format(s))
    return unhexlify(charset_to_hex(s, B40_CHARS))

def bin_sha256(s):
    if isinstance(s, str):
        s = s.encode('utf-8')
    return hashlib.sha256(s).digest()

def bin_hash160(s, hex_format=False):
    """ s is in hex or binary format """
    if hex_format and is_hex(s):
        s = unhexlify(s)
    return hashlib.new('ripemd160', bin_sha256(s)).digest()

def hex_hash160(s, hex_format=False):
    """ s is in hex or binary format """
    if hex_format and is_hex(s):
        s = unhexlify(s)
    return hexlify(bin_hash160(s))

def hash_name(name, script_pubkey, register_addr=None):
    """
    Generate the hash over a name and hex-string script pubkey.
    Returns the hex-encoded string RIPEMD160(SHA256(x)), where
    x is the byte string composed of the concatenation of the
    binary name, the script pubkey, and (if given) the register address.
    """
    bin_name = b40_to_bin(name)
    name_and_pubkey = bin_name + unhexlify(script_pubkey)
    if register_addr is not None:
        name_and_pubkey += str(register_addr).encode('ascii')
    # make hex-encoded hash
    return hex_hash160(name_and_pubkey)

def hash128(data):
    """
    Hash a string of data by taking its 256-bit sha256 and truncating it
    to the first 16 bytes
    """
    return hexlify(bin_sha256(data)[0:16])
```
1 core/blockstack_naming_service.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

1 core/cli.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

148 core/faq_evaluators.md

@ -1,148 +0,0 @@
## What is the Blockstack ecosystem?
In the Blockstack ecosystem, users control their data and apps run on their devices. There
are no middlemen, no passwords, no massive data silos to breach, and no services
tracking us around the internet.
The applications on Blockstack are serverless and decentralized. Developers
start by building a single-page application in JavaScript. Then, instead of
plugging the frontend into a centralized API, they plug into an API run by the
user. Developers install a library called `blockstack.js` and don't have to
worry about running servers, maintaining databases, or building out user
management systems.
Personal user APIs ship with the Blockstack app and handle everything from
identity and authentication to data storage. Applications can request
permissions from users and then gain read and write access to user resources.
Data storage is simple and reliable and uses existing cloud infrastructure.
Users connect with their Dropbox, Google Drive, S3, etc... and data is synced
from their local device up to their cloud.
Identity is user-controlled and utilizes the blockchain for secure management of
keys, devices and usernames. When users login with apps, they are anonymous by
default and use an app-specific key, but their full identity can be revealed and
proven at any time. Keys are for signing and encryption and can be changed as
devices need to be added or removed.
Under the hood, Blockstack provides a decentralized domain name system (DNS),
decentralized public key distribution system, and registry for apps and user
identities.
## What problems does Blockstack solve?
Developers can now build Web applications where:
- you own your data, not the application
- you control where your data is stored
- you control who can access your data
Developers can now build Web applications where:
- you don't have to deal with passwords
- you don't have to host everyone's data
- you don't have to run app-specific servers
Right now, Web application users are "digital serfs" and applications are the "digital landlords". Users don't own their data; the app owns it. Users don't control where data gets stored; they can only store it on the application. Users don't control access to it; they only advise the application on how to control access (which the application can ignore).
Blockstack applications solve both sets of problems. Users pick and choose highly-available storage providers like Dropbox or BitTorrent to host their data, and applications read it with the user's consent. Blockstack ensures that all data is signed and verified and (optionally) encrypted end-to-end, so users can treat storage providers like dumb hard drives: if you don't like yours, you can swap it out with a better one. Users can take their data with them if they leave the application, since it was never the application's in the first place.
At the same time, developers are no longer on the hook for hosting user data. Since users bring their own storage and use public-key cryptography for authentication, applications don't have to store anything--there's nothing to steal when they get hacked. Moreover, many Web applications today can be re-factored so that everything happens client-side, obviating the need for running dedicated application servers.
## What is a Blockstack ID?
Blockstack IDs are usernames. Unlike normal Web app usernames, Blockstack IDs
are usable *across every Blockstack app.* They fill a similar role to
centralized single-signon services like Facebook or Google. However, you and
only you control your Blockstack ID, and no one can track your logins.
## How do I get a Blockstack ID?
Use the [Blockstack Browser]({{ site.baseurl }}/browser/browser-introduction.html) to create a
new ID.
## Why do I need a Blockstack ID?
Blockstack IDs are used to discover where you are keeping your
(publicly-readable) application data. For example, if `alice.id` wants to share
a document with `bob.id`, then `bob.id`'s browser uses the Blockstack ID
`alice.id` to look up where `alice.id` stored it.
The technical descriptions of how and why this works are quite long.
Please see the [Blockstack Naming Service]({{site.baseurl}}/core/naming/introduction.html)
documentation for a full description.
## What components make up the Blockstack ecosystem?
The components that make up Blockstack do not have any central points of
control.
* The [Blockstack Naming Service]({{ site.baseurl }}/core/naming/introduction.html) runs on top of
the Bitcoin blockchain, which itself is decentralized. It binds Blockstack
IDs to a small amount of on-chain data (usually a hash of off-chain data).
* The [Atlas Peer Network]({{ site.baseurl }}/core/atlas/overview.html) stores chunks of data referenced by
names in BNS. It operates under similar design principles to BitTorrent, and
has no single points of failure. The network is self-healing---if a node
crashes, it quickly recovers all of its state from its peers.
* The [Gaia storage system](https://github.com/blockstack/gaia) lets users
choose where their application data gets hosted. Gaia reduces all storage
systems---from cloud storage to peer-to-peer networks---to dumb, interchangeable
hard drives. Users have maximum flexibility and control over their data in a
way that is transparent to app developers.
## Blockstack vs Ethereum
Blockstack and Ethereum both strive to provide a decentralized application
platform. Blockstack's design philosophy differs from Ethereum's design
philosophy in that Blockstack emphasizes treating the blockchain as a "dumb
ledger" with no special functionality or properties beyond a few bare minimum
requirements. Instead, it strives to do everything off-chain---an application of the [end-to-end principle](https://en.wikipedia.org/wiki/End-to-end_principle).
Most Blockstack applications do *not*
interact with the blockchain, and instead interact with Blockstack
infrastructure through client libraries and RESTful endpoints.
This is evidenced by Blockstack's decision to implement its naming system (BNS), discovery and routing system
(Atlas), and storage system (Gaia) as blockchain-agnostic components that can be
ported from one blockchain to another.
Ethereum takes the opposite approach. Ethereum dapps are expected to interface
directly with on-chain smart contract logic, and are expected to host a
non-trivial amount of state in the blockchain itself. This is necessary for
them, because many Ethereum dapps' business logic is centered around the
mechanics of an ERC20 token.
Blockstack does not implement a smart contract system (yet), but it will soon
implement a [native token](https://blockstack.com/distribution.pdf) that will be
accessible to Blockstack applications.
## What's the difference between Onename and Blockstack?
Onename is the free Blockstack ID registrar run by Blockstack. It makes it easy to register your name and setup your profile. Once the name has been registered in Onename you can transfer it to a wallet you control, or leave it there and use it as you like.
## How is Blockstack different from Namecoin?
Blockstack DNS differs from Namecoin DNS in a few fundamental ways: blockchain layering, storage models, name pricing models, and incentives for miners. We wrote a post where you can learn more here: https://blockstack.org/docs/blockstack-vs-namecoin
## I heard you guys were on Namecoin, what blockchain do you use now?
We use the Bitcoin blockchain for our source of truth.
## How long has the project been around?
Work on the project started in late 2013. First public commits on the code are
from Jan 2014. The first registrar for Blockstack was launched in March 2014 and
the project has been growing since then.
## Who started the project? Who maintains it?
The project was started by two engineers from Princeton University. Muneeb Ali
and Ryan Shea met at the Computer Science department at Princeton, where Muneeb
was finishing his PhD and Ryan was running the entrepreneurship club. In 2014,
frustrated by the walled gardens and security problems of the current internet,
they started working on a decentralized internet secured by blockchains. A full
list of contributors can be found
[here](https://github.com/blockstack/blockstack-core/graphs/contributors).
1 core/gaia.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

1 core/glossary.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

3 core/interactive_regtest_macros.md

@ -1,3 +0,0 @@
Documentation for setting up the regtest mode for Blockstack Browser
using core's integration tests in macOS and Linux has
moved [here](../integration_tests).
1 core/namespace_creation.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

1 core/openbazaar.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

1 core/resolver.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

1 core/search.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.

67 core/setup_core_portal.md

@ -1,67 +0,0 @@
# About
This document is for **Linux users who do not want to use Docker** to run the
Blockstack Browser. Instructions are tailored for Ubuntu, but are similar on other distributions.
# Setting up Blockstack Browser Node Application
Install NodeJS through NodeSource PPA
```
curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
sudo apt install -y nodejs
```
Download Blockstack Browser and install its dependencies
```
git clone https://github.com/blockstack/blockstack-browser.git
cd blockstack-browser
npm install node-sass
npm install
```
Note that `blockstack-browser` depends on `node-sass`, which can sometimes install strangely on Linux; running `npm install node-sass` before installing the other dependencies avoids that problem.
# Running Blockstack Browser
Start the CORS proxy.
```
npm run dev-proxy &
```
Start the Node Application
```
npm run dev
```
Then you can open `http://localhost:3000/` in your browser to get to the Blockstack Browser.
## Setting up a protocol handler
If you'd like your browser to automatically handle links with the `blockstack:` protocol specifier, you will need to register a protocol handler with your desktop environment. In Ubuntu/Gnome, this can be done by creating a file
`~/.local/share/applications/blockstack.desktop`
With the following contents:
```
[Desktop Entry]
Type=Application
Terminal=false
Exec=bash -c 'xdg-open http://localhost:3000/auth?authRequest=$(echo "%u" | sed s,blockstack:////*,,)'
Name=Blockstack-Browser
MimeType=x-scheme-handler/blockstack;
```
Then you need to make this file executable, and register it as a protocol handler.
```
$ chmod +x ~/.local/share/applications/blockstack.desktop
$ xdg-mime default blockstack.desktop x-scheme-handler/blockstack
```
Now, `blockstack:` protocol URLs should be handled by your Blockstack Browser. If you're running the Browser in your web browser's private mode, you may have to copy and paste the link, as this protocol handler will try to open in a regular browser window.
1 core/subdomain.md

@ -1 +0,0 @@
The documentation has been moved to [docs.blockstack.org](https://docs.blockstack.org/), please update your bookmarks.