Commit cfcedf3: Add first draft of readme

README.md

# System76 Certification

A collection of automated and manual processes to ensure the delivery of a quality Ubuntu machine.

## Rationale

For production and engineering reasons, we would like a set of automated and manual procedures, requiring as little user input as possible, that identify product issues.

For support reasons, we would like a set of tests that support staff can use to identify potential hardware issues in the field.

For marketing reasons, we would like more information to offer potential customers about the effectiveness of our machines.

## High Level Concepts

### Product Conception

When we are attempting to produce a new product, we need automated procedures to evaluate what hardware is present and can be activated. This information should be collected in a centralized repository.

### Product Development

To develop the product, we then need to run both automated and manual tests, iterating to produce a releasable product. This information should be tracked over time, which we already do with a manual process.

### Product Release

Upon product release, we should be able to publish the results of our tests in a beautiful, powerful manner to complement our other product materials. We should attempt to track user engagement with this information.

### Product Maintenance

When we handle support issues, we should refer to collected test information and be able to reproduce test results on machines in the field or in the RMA process.

## Low Level Implementation

### Automated Hardware Discovery

Given a new barebones machine, we should have a process for identifying the hardware present and determining whether there are test cases covering its usage.

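
The discovery step could be sketched as follows. This is a minimal illustration, not the implementation: the `TESTED_CLASSES` coverage table and the sample output are hypothetical, and a real pass would run `lspci -mm` (and likely `lsusb`) on the machine under test.

```python
import subprocess

# Hypothetical table of device classes we already have test cases for.
TESTED_CLASSES = {"VGA compatible controller", "Ethernet controller"}

def parse_lspci_mm(output: str) -> list[str]:
    """Extract the device-class field from `lspci -mm` style lines."""
    classes = []
    for line in output.strip().splitlines():
        # lspci -mm quotes each field: slot "class" "vendor" "device" ...
        fields = line.split('"')
        if len(fields) > 1:
            classes.append(fields[1])
    return classes

def discover() -> str:
    """Enumerate PCI devices on the machine under test."""
    return subprocess.run(["lspci", "-mm"], capture_output=True, text=True).stdout

def uncovered(classes: list[str]) -> list[str]:
    """Device classes present on the machine but lacking test coverage."""
    return sorted(set(classes) - TESTED_CLASSES)

sample = '00:00.0 "Host bridge" "Intel" "Device"\n01:00.0 "VGA compatible controller" "NVIDIA" "Device"'
print(uncovered(parse_lspci_mm(sample)))  # ['Host bridge']
```

On a real machine, `uncovered(parse_lspci_mm(discover()))` would produce the list that feeds the centralized repository.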
### Standardized Tests

Each class of hardware should have a set of automated and manual tests and benchmarks that can be run in a scripted form. Our preference is to segregate manual and automated tests in order to allow quick runs of either.

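
One way to keep the two groups segregated while sharing one registry is to tag every test as automated or manual and filter at run time. A sketch under that assumption; the test names and hardware classes are hypothetical placeholders:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HardwareTest:
    name: str
    hardware_class: str
    automated: bool
    run: Callable[[], bool]

REGISTRY: list[HardwareTest] = []

def register(name: str, hardware_class: str, automated: bool):
    """Decorator that files a test function in the shared registry."""
    def wrap(fn: Callable[[], bool]):
        REGISTRY.append(HardwareTest(name, hardware_class, automated, fn))
        return fn
    return wrap

@register("disk-smart", "storage", automated=True)
def disk_smart() -> bool:
    return True  # placeholder: would invoke a SMART self-test

@register("keyboard-backlight", "input", automated=False)
def keyboard_backlight() -> bool:
    return True  # placeholder: would prompt the operator

def run_suite(automated: bool) -> dict[str, bool]:
    """Run only the automated tests, or only the manual ones."""
    return {t.name: t.run() for t in REGISTRY if t.automated == automated}

print(run_suite(automated=True))  # {'disk-smart': True}
```

Keeping the flag on the test rather than in separate directories lets a single scripted runner do a quick automated-only pass without touching the manual tests.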
### Test Suggestion

For discovered hardware that does not have tests or benchmarks, suggestions should be made and included in the results.

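
The suggestion step can be as simple as mapping each uncovered hardware class to an entry that travels with the results payload; the field names here are hypothetical:

```python
def suggest_tests(uncovered_classes: set[str]) -> list[dict[str, str]]:
    """One suggestion entry per hardware class with no coverage."""
    return [
        {"hardware_class": cls,
         "suggestion": f"No test or benchmark covers '{cls}'; consider adding one."}
        for cls in sorted(uncovered_classes)
    ]

print(suggest_tests({"Audio device"}))
```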
### Automated Result Collection

Result collection should be automated regardless of whether the test itself was automated.

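
A sketch of what uniform collection might look like: every outcome, manual or automated, is wrapped in the same JSON record before being sent to the central repository. The record schema is an assumption, not a defined format.

```python
import json
import platform
from datetime import datetime, timezone

def collect(results: dict[str, bool], automated: bool) -> str:
    """Wrap test outcomes in one JSON record, however they were produced."""
    record = {
        "machine": platform.node(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "automated": automated,
        "results": results,
    }
    return json.dumps(record)

# A manual test's result goes through the exact same path as an automated one.
payload = collect({"keyboard-backlight": True}, automated=False)
print(payload)
```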
### Pretty and "Ugly" Interfaces to Results

We should present test results to users in a highly aesthetic form, while also providing more detailed information for our own purposes and those of power users.

### Comparison of Results Across Hardware

We should be able to compare results among different pieces of hardware, allowing us to display benchmark results to customers interested in examining two pieces of hardware.

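
Comparison could start as simply as ratios over the benchmarks two machines share. The benchmark names and scores below are made up for illustration, and higher scores are assumed to be better:

```python
def compare(a: dict[str, float], b: dict[str, float]) -> dict[str, float]:
    """Score of machine b relative to machine a for every shared benchmark."""
    return {name: b[name] / a[name] for name in a.keys() & b.keys()}

laptop = {"glxgears-fps": 3000.0, "disk-read-mbps": 500.0}
desktop = {"glxgears-fps": 6000.0, "disk-read-mbps": 550.0}
ratios = compare(laptop, desktop)
print(ratios["glxgears-fps"])  # 2.0
```

Restricting the comparison to shared benchmark names keeps the display honest when two machines were tested against different suites.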
### Tests Prior to Shipping

We should have sanity testing to identify lemons before shipping them.

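
The gate itself can be a thin wrapper over the quick automated checks: any failure blocks shipment. The check names are hypothetical.

```python
def sanity_check(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (ship_it, failed_checks) for a batch of quick checks."""
    failures = [name for name, passed in checks.items() if not passed]
    return (not failures, failures)

ok, failed = sanity_check({"boots": True, "battery-detected": False})
print(ok, failed)  # False ['battery-detected']
```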
### User-Generated Results

We should allow users to run this test tool and produce their own test results.
