Commit 3a76f38

jcavar and hrvoic authored
Add support for waveform snapshots (#1)
* Add support for waveform snapshots

  This commit adds support for waveform snapshots on `AVAudioPCMBuffer`. Additionally:
  - Add required resources for testing
  - Add tests for basic waveform types

* Update README.md
* Update Tests/AudioSnapshotTestingTests/AudioSnapshotTestingTests.swift
* Update Package.swift
* Update Package.swift
* Extract padding and calculate height
* Add fatal errors instead of force unwraps
* Update README

Co-authored-by: Hrvoje Hrvoić <hrvoje.hrvoic@infinum.hr>
1 parent c85f795 commit 3a76f38

27 files changed

Lines changed: 372 additions & 0 deletions

Package.resolved

Lines changed: 24 additions & 0 deletions
Some generated files are not rendered by default.

Package.swift

Lines changed: 31 additions & 0 deletions
```swift
// swift-tools-version: 6.0
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "AudioSnapshotTesting",
    platforms: [.iOS(.v15), .macOS(.v12), .tvOS(.v13), .watchOS(.v6)],
    products: [
        .library(
            name: "AudioSnapshotTesting",
            targets: ["AudioSnapshotTesting"]
        )
    ],
    dependencies: [
        .package(url: "https://github.com/pointfreeco/swift-snapshot-testing", from: "1.17.6")
    ],
    targets: [
        .target(
            name: "AudioSnapshotTesting",
            dependencies: [
                .product(name: "SnapshotTesting", package: "swift-snapshot-testing")
            ]
        ),
        .testTarget(
            name: "AudioSnapshotTestingTests",
            dependencies: ["AudioSnapshotTesting"],
            resources: [.process("Resources")]
        )
    ]
)
```

README.md

Lines changed: 101 additions & 0 deletions
# AudioSnapshotTesting

A Swift package for [SnapshotTesting](https://github.com/pointfreeco/swift-snapshot-testing) audio buffers in your iOS/macOS apps.

<!--
This is the status area for the project.
Add project badges (if needed) to this part of the file.
-->

## Description

AudioSnapshotTesting provides snapshot strategies for testing audio-related functionality through visual snapshots. This makes it easier to verify audio processing and manipulation in a visual, deterministic way.

## Table of contents

* [Getting started](#getting-started)
* [Usage](#usage)
* [Contributing](#contributing)
* [License](#license)
* [Credits](#credits)

## Getting started

Add AudioSnapshotTesting as a dependency in your `Package.swift` file:

```swift
dependencies: [
    .package(url: "https://github.com/infinum/AudioSnapshotTesting.git", from: "0.1.0")
]
```
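Then add the library product to each target that uses it. The target name below is a placeholder; the product name comes from the package manifest:

```swift
.testTarget(
    name: "MyAudioTests", // placeholder: your own test target
    dependencies: [
        .product(name: "AudioSnapshotTesting", package: "AudioSnapshotTesting")
    ]
)
```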
## Usage

### Basic Snapshot Test

```swift
import AudioSnapshotTesting
import XCTest

class MyAudioTests: XCTestCase {
    func testAudioProcessing() {
        let buffer = // your AVAudioPCMBuffer
        assertSnapshot(
            of: buffer,
            as: .waveform(width: 3000, height: 800)
        )
    }
}
```
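To compare two buffers in one image, pass a tuple to the same strategy; both waveforms are drawn into a single snapshot in different colors. A sketch with placeholder buffers:

```swift
import AudioSnapshotTesting
import XCTest

class MyOverlayTests: XCTestCase {
    func testOverlay() {
        let recorded = // your AVAudioPCMBuffer
        let reference = // your AVAudioPCMBuffer
        // The tuple overload renders both waveforms overlaid for comparison.
        assertSnapshot(
            of: (recorded, reference),
            as: .waveform(width: 3000, height: 800)
        )
    }
}
```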
### Features

- [x] `AVAudioPCMBuffer` waveform snapshots
- [x] `AVAudioPCMBuffer` overlaid waveform snapshots
- [ ] Spectrogram
- [ ] Spectra
- [ ] Test against other reference implementations and with known audio files
- [ ] Documentation
- [ ] Mention JUCE
- [ ] Link blog post

## Contributing

We believe that the community can help us improve and build a better product.
Please refer to our [contributing guide](CONTRIBUTING.md) to learn about the types of contributions we accept and the process for submitting them.

To ensure that our community remains respectful and professional, we defined a [code of conduct](CODE_OF_CONDUCT.md) <!-- and [coding standards](<link>) --> that we expect all contributors to follow.

We appreciate your interest and look forward to your contributions.

## License

```text
Copyright 2024 Infinum

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

## Credits

Maintained and sponsored by [Infinum](https://infinum.com).

<div align="center">
  <a href='https://infinum.com'>
    <picture>
      <source srcset="https://assets.infinum.com/brand/logo/static/white.svg" media="(prefers-color-scheme: dark)">
      <img src="https://assets.infinum.com/brand/logo/static/default.svg">
    </picture>
  </a>
</div>
Lines changed: 65 additions & 0 deletions

```swift
import AVFAudio
import Accelerate

extension AVAudioPCMBuffer {
    /// Reduces the `AVAudioPCMBuffer` data into `bucketCount` pieces.
    /// A buffer usually contains more data than required for display,
    /// so this function reduces it by choosing the sample with the largest
    /// absolute value in each bucket of samples.
    /// Returns a pair of buckets and the maximum bucket value.
    func reduce(bucketCount: Int) -> ([Float], Float) {
        let frameCount = Int(self.frameLength)
        guard frameCount > 0 else { return ([], 0) }
        let mono = mixToMono()
        guard let monoData = mono.floatChannelData else {
            fatalError("Not a float audio format")
        }
        let samples = Array(UnsafeBufferPointer(start: monoData[0], count: frameCount))
        let samplesPerBucket = max(1, Double(frameCount) / Double(bucketCount))

        var buckets = [Float](repeating: 0, count: bucketCount)
        var maxBucket: Float = 0
        for i in 0..<bucketCount {
            let bucketStart = Int(Double(i) * samplesPerBucket)
            // Derive the end from the next bucket's start so no samples fall into a gap
            // when samplesPerBucket is fractional.
            let bucketEnd = min(Int(Double(i + 1) * samplesPerBucket), frameCount)
            guard bucketStart < bucketEnd else { break }
            let bucketSamples = samples[bucketStart..<bucketEnd]
            // Keep the signed sample with the largest magnitude in this bucket.
            let maxSample = bucketSamples.reduce(into: Float(0)) { currentMax, value in
                if abs(value) > abs(currentMax) {
                    currentMax = value
                }
            }
            buckets[i] = maxSample
            if abs(maxSample) > maxBucket {
                maxBucket = abs(maxSample)
            }
        }
        return (buckets, maxBucket)
    }
}

private extension AVAudioPCMBuffer {
    /// Mixes the `AVAudioPCMBuffer` down to mono by summing channels.
    /// For display, we typically only work with a single audio channel.
    func mixToMono() -> AVAudioPCMBuffer {
        guard let newFormat = AVAudioFormat(standardFormatWithSampleRate: format.sampleRate, channels: 1) else {
            fatalError("Unsupported audio format \(format)")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: newFormat, frameCapacity: frameLength) else {
            fatalError("Unsupported audio format \(newFormat)")
        }
        buffer.frameLength = frameLength
        let stride = vDSP_Stride(1)
        guard let result = buffer.floatChannelData?[0] else {
            fatalError("Not a float audio format")
        }
        for channel in 0 ..< format.channelCount {
            guard let channelData = self.floatChannelData?[Int(channel)] else {
                fatalError("Not a float audio format")
            }
            // Accumulate each channel into the mono buffer.
            vDSP_vadd(channelData, stride, result, stride, result, stride, vDSP_Length(frameLength))
        }
        return buffer
    }
}
```
Lines changed: 73 additions & 0 deletions

```swift
import AVFAudio
import SwiftUI
import Accelerate

@_exported import SnapshotTesting

#if os(macOS)
public typealias PlatformImage = NSImage
typealias PlatformView = NSView
typealias PlatformHostingView = NSHostingView
#elseif os(iOS)
public typealias PlatformImage = UIImage
typealias PlatformView = UIView
typealias PlatformHostingView = _UIHostingView
#endif

@MainActor
public extension Snapshotting where Format == PlatformImage, Value == (AVAudioPCMBuffer, AVAudioPCMBuffer) {
    /// Generates an overlaid waveform snapshot of the given tuple of `AVAudioPCMBuffer`s.
    static func waveform(width: Int, height: Int) -> Snapshotting {
        Snapshotting<PlatformView, PlatformImage>.image(size: .init(width: width, height: height))
            .pullback { buffer1, buffer2 in
                let (buckets1, max1) = buffer1.reduce(bucketCount: width)
                let (buckets2, max2) = buffer2.reduce(bucketCount: width)
                let data1 = buckets1.enumerated().map(Bucket.init)
                let data2 = buckets2.enumerated().map(Bucket.init)
                let verticalPadding: CGFloat = 4
                let waveformHeight = CGFloat(height) - (verticalPadding * 2)
                let waveform1 = WaveformView(
                    buckets: data1,
                    absMax: max1,
                    height: waveformHeight,
                    color: .red
                )
                let waveform2 = WaveformView(
                    buckets: data2,
                    absMax: max2,
                    height: waveformHeight,
                    color: .green
                )
                let waveform = ZStack {
                    waveform1
                    waveform2
                }
                .padding(.vertical, verticalPadding)
                .background(Color.black)
                return PlatformHostingView(rootView: waveform.environment(\.colorScheme, .light))
            }
    }
}

@MainActor
public extension Snapshotting where Format == PlatformImage, Value == AVAudioPCMBuffer {
    /// Generates a waveform snapshot of the given `AVAudioPCMBuffer`.
    static func waveform(width: Int, height: Int) -> Snapshotting {
        Snapshotting<PlatformView, PlatformImage>.image(size: .init(width: width, height: height))
            .pullback { buffer in
                let verticalPadding: CGFloat = 4
                let waveformHeight = CGFloat(height) - (verticalPadding * 2)
                let (buckets, max) = buffer.reduce(bucketCount: width)
                let data = buckets.enumerated().map(Bucket.init)
                let waveform = WaveformView(
                    buckets: data,
                    absMax: max,
                    height: waveformHeight,
                    color: .red
                )
                .padding(.vertical, verticalPadding)
                .background(Color.black)
                return PlatformHostingView(rootView: waveform.environment(\.colorScheme, .light))
            }
    }
}
```
Lines changed: 6 additions & 0 deletions

```swift
struct Bucket: Identifiable {
    let index: Int
    let max: Float

    var id: Int { index }
}
```
Lines changed: 30 additions & 0 deletions

```swift
import SwiftUI

struct WaveformView: View {
    private let color: Color
    private let path: Path

    init(
        buckets: [Bucket],
        absMax: Float,
        height: CGFloat,
        color: Color
    ) {
        self.color = color
        let halfHeight = height / 2
        path = Path { path in
            path.move(to: CGPoint(x: 0, y: halfHeight))
            for (index, bucket) in buckets.enumerated() {
                // Scale each bucket's peak relative to the overall maximum.
                let sampleHeight = absMax > 0 ? (CGFloat(bucket.max) / CGFloat(absMax)) * halfHeight : 0
                let point = CGPoint(x: CGFloat(index), y: halfHeight - sampleHeight)
                path.addLine(to: point)
            }
        }
    }

    var body: some View {
        Canvas(opaque: true) { context, _ in
            context.stroke(path, with: .color(color))
        }
    }
}
```
Lines changed: 42 additions & 0 deletions

```swift
import Testing
import AVFAudio
@testable import AudioSnapshotTesting

@Test(
    .snapshots(record: false, diffTool: .ksdiff),
    arguments: ["sine", "triangle", "square", "sawtooth", "brown", "pink", "white"]
)
@MainActor
func fileWaveform(wave: String) async throws {
    // TODO: account for retina

    assertSnapshot(
        of: try AVAudioPCMBuffer.read(wave: wave),
        as: .waveform(width: 3000, height: 800),
        named: wave
    )
}

@Test(.snapshots(record: false, diffTool: .ksdiff))
@MainActor
func fileWaveformOverlay() async throws {
    let buffer1 = try AVAudioPCMBuffer.read(wave: "sine")
    let buffer2 = try AVAudioPCMBuffer.read(wave: "square")

    assertSnapshot(
        of: (buffer1, buffer2),
        as: .waveform(width: 4000, height: 1000),
        named: "square-over-sine"
    )
}

private extension AVAudioPCMBuffer {
    static func read(wave: String) throws -> AVAudioPCMBuffer {
        let file = try AVAudioFile(
            forReading: #require(Bundle.module.url(forResource: wave, withExtension: "wav"))
        )
        let buffer = try #require(AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)))
        try file.read(into: buffer)
        return buffer
    }
}
```
Binary resource files (64 KB each) not shown.
