Categories
iOS Swift

Initializing @MainActor type from a non-isolated context in Swift

Recently I was working on some code where I wanted a type to require @MainActor, since the type was an ObservableObject and it makes sense for it to always publish changes on the MainActor. The MainActor type needed to be created by another type which is not bound to the MainActor. How to do it?

This does not work by default, since we are creating the MainActor type from a non-isolated context.

final class ViewPresenter {
    init(dataObserver: DataObserver) {
        // Call to main actor-isolated initializer 'init(dataObserver:)' in a synchronous nonisolated context
        self.viewState = ViewState(dataObserver: dataObserver)
    }

    let viewState: ViewState
}

@MainActor final class ViewState: ObservableObject {
    let dataObserver: DataObserver

    init(dataObserver: DataObserver) {
        self.dataObserver = dataObserver
    }

    @Published private(set) var status: Status = .loading
    @Published private(set) var contacts: [Contact] = []
}

OK, this does not work. But since ViewState has a simple init, why not just slap nonisolated on the init so that it no longer needs to be called on the MainActor? This leads to a warning: “Main actor-isolated property ‘dataObserver’ can not be mutated from a non-isolated context; this is an error in Swift 6”. After digging in the Swift forums to understand the error, I learned that as soon as init assigns the dataObserver instance to the MainActor-guarded property, the compiler considers that instance to be owned by the MainActor. Since init is nonisolated, the compiler can’t ensure that the assigned instance is not also mutated from the non-isolated context.

final class ViewPresenter {
    init(dataObserver: DataObserver) {
        self.viewState = ViewState(dataObserver: dataObserver)
    }

    let viewState: ViewState
}

@MainActor final class ViewState: ObservableObject {
    let dataObserver: DataObserver

    nonisolated init(dataObserver: DataObserver) {
        // Main actor-isolated property 'dataObserver' can not be mutated
        // from a non-isolated context; this is an error in Swift 6
        self.dataObserver = dataObserver
    }

    @Published private(set) var status: Status = .loading
    @Published private(set) var contacts: [Contact] = []
}

This warning can be fixed by making the DataObserver type conform to the Sendable protocol, which tells the compiler that it is safe to share the instance across isolation contexts (of course, we need to ensure that the type really is thread-safe before adding the conformance). In this particular case, making the type Sendable was not possible, and I really did not want to go to the land of @unchecked Sendable, so I continued my research. Moreover, a nonisolated init did not feel quite right anyway.
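For illustration, if DataObserver only held immutable state, the conformance could be as small as the sketch below (the source property is purely hypothetical; again, in my case the type could not be made Sendable, hence the approach that follows).

// A minimal sketch, assuming DataObserver could be reduced to immutable state.
// A final class with only immutable, Sendable stored properties can declare
// the conformance safely.
final class DataObserver: Sendable {
    let source: URL // hypothetical property

    init(source: URL) {
        self.source = source
    }
}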

Finally, I realized that since ViewState is @MainActor, I could make the viewState property @MainActor as well and delay creating the instance until the property is accessed. This makes sense because if I want to access the ViewState and interact with it, I need to be on the MainActor anyway. If the property is a lazy var created from a closure, we achieve what we want: instance creation is forced onto the MainActor. The code probably speaks for itself.

final class ViewPresenter {
    private let viewStateBuilder: @MainActor () -> ViewState

    init(dataObserver: DataObserver) {
        self.viewStateBuilder = { ViewState(dataObserver: dataObserver) }
    }

    @MainActor lazy var viewState: ViewState = viewStateBuilder()
}

@MainActor final class ViewState: ObservableObject {
    let dataObserver: DataObserver

    init(dataObserver: DataObserver) {
        self.dataObserver = dataObserver
    }

    @Published private(set) var status: Status = .loading
    @Published private(set) var contacts: [Contact] = []
}

What I like is that I can keep one of the types fully @MainActor and still manage its creation from a non-isolated context. The downside is having a lazy var and handling the closure.
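As a hypothetical illustration of the call site, any MainActor-isolated code can touch the property directly, and the first access is what creates the ViewState:

// A minimal sketch; presentContacts is a hypothetical MainActor-isolated function.
@MainActor func presentContacts(using presenter: ViewPresenter) {
    // The first access runs viewStateBuilder() and creates the ViewState on the MainActor.
    let viewState = presenter.viewState
    print(viewState.status)
}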

If you want to try my apps, then grab one of the free offer codes for Silky Brew.

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
iOS Swift

Avoiding subtle mistake when guarding mutable state with DispatchQueue

Last week, I spent quite a bit of time investigating an issue which sometimes happened and sometimes did not. There was quite a bit of code involved running on multiple threads, so tracking it down was not simple. No surprise to find that it was a concurrency issue. The issue lay in the implementation of guarding mutable state with a DispatchQueue. The goal of this blog post is to remind us of a pattern which looks nice at first but can actually cause issues down the road.

Let’s have a look at an example where we have a Storage class which holds data in a dictionary where keys are IDs and values are Data instances. There are multiple ways of guarding mutable state. In the example, we are using a concurrent DispatchQueue. Concurrent queues are not as optimized as serial queues, but the reasoning here is that we store large data blobs and concurrent reading gives us a slight benefit over serial reading. With concurrent queues we must make sure all the reading operations have finished before we mutate the shared state, and therefore we use the barrier flag, which tells the queue to wait until all the previously enqueued work items have finished.

final class Storage {
    private let queue = DispatchQueue(label: "myexample", attributes: .concurrent)
    private var _contents = [String: Data]()

    private var contents: [String: Data] {
        get {
            queue.sync { _contents }
        }
        set {
            queue.async(flags: .barrier) { self._contents = newValue }
        }
    }

    func store(_ data: Data, forIdentifier id: String) {
        contents[id] = data
    }

    // …
}

The snippet above might look pretty nice at first, since all the logic around synchronization is in one place, and we can use the contents property in other functions without needing to think about using the queue. For validating that it works correctly, we can add a unit test.

func testThreadSafety() throws {
    let iterations = 100
    let storage = Storage()
    DispatchQueue.concurrentPerform(iterations: iterations) { index in
        storage.store(Data(), forIdentifier: "\(index)")
    }
    XCTAssertEqual(storage.numberOfItems, iterations)
}

The test fails because we actually have a problem in the Storage class. The problem is that contents[id] = data performs two separate operations on the queue: first it reads the current state using the property getter, and then it sets the new, modified dictionary using the setter. Let’s walk this through with an example where thread A calls the store function and tries to add a new key “d”, while thread B calls the store function at the same time and tries to add a new key “e”. The flow might look something like this:

Thread A calls the getter and gets a dictionary with keys “a, b, c”. Before thread A calls the setter, thread B also gets a chance to read the dictionary and sees the same keys “a, b, c”. Thread A then reaches the point where it calls the setter and inserts the modified dictionary with keys “a, b, c, d”, and just after that thread B does the same but inserts a dictionary with keys “a, b, c, e”. When the queue finishes processing all the work items, the key “d” is lost, since thread B managed to read the shared dictionary state before thread A modified it. The moral of the story is that when modifying shared state, reading the initial state and setting the new value must be synchronized as one unit and can’t happen as separate work items on the synchronizing queue. That is exactly what happened here, since the dictionary subscript first runs the getter and then the setter.

The suggestion for fixing such issues is to use a single queue and to make sure that the read and the write happen within the same work item.

func store(_ data: Data, forIdentifier id: String) {
    // Incorrect because read and write happen in separate blocks on the queue
    // contents[id] = data

    // Correct
    queue.async(flags: .barrier) {
        self._contents[id] = data
    }
}

An alternative approach to this Storage class implementation, with the new concurrency features in mind, could be using an actor instead. But keep in mind that in that case we need to use await when accessing the storage, since actors are part of Swift’s concurrency model. Using the await keyword in turn requires an async context, so it might not be straightforward to adopt.

actor Storage {
    private var contents = [String: Data]()

    func store(_ data: Data, forIdentifier id: String) {
        contents[id] = data
    }

    var numberOfItems: Int { contents.count }
}

// Example:
// await storage.store(data, forIdentifier: id)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
iOS Swift SwiftUI

AsyncPhoto with caching in SwiftUI (part 2)

In part 1 of the series, AsyncPhoto for displaying large photos in SwiftUI, we built a SwiftUI view which has a similar interface to Apple’s AsyncImage but provides a way to use any kind of image data source. In part 2 of the series, we’ll implement an in-memory cache for AsyncPhoto. This is important for reducing the flickering caused by the async nature of image loading. An example of where it comes in useful is a detail view which displays a thumbnail of a large photo. If we open the detail view multiple times for the same photo, we really do not want to see the loading spinner every single time. Another benefit is that we do not need to load a huge photo into memory and then spend CPU on scaling it down again.

OK, let’s jump into it.

The aim of the cache is to cache the scaled-down images. We never want to cache the original image data, since that would make memory usage go through the roof and we would still need to use CPU to scale the image down. Before we start, we need to remember that in part 1 we designed AsyncPhoto so that it has an ID, a scaledSize property, and a closure for returning image data asynchronously. Therefore, the caching key needs to be created from the ID and the scaled size, since we might want to display a photo in multiple AsyncPhoto instances with different sizes. Let’s create an interface for the caching layer. We’ll go for a protocol-based approach, which allows replacing the caching logic with different concrete implementations. In this blog post we’ll go for an NSCache-backed implementation, but anyone could use other approaches as well, like LRUCache.

/// An interface for caching images by identifier and size.
protocol AsyncPhotoCaching {
    /// Store the specified image by size and identifier.
    /// - Parameters:
    ///   - image: The image to be cached.
    ///   - id: The unique identifier of the image.
    func store(_ image: UIImage, forID id: any Hashable)

    /// Returns the image associated with a given id and size.
    /// - Parameters:
    ///   - id: The unique identifier of the image.
    ///   - size: The size of the image stored in the cache.
    /// - Returns: The image associated with id and size, or nil if no image is associated with id and size.
    func image(for id: any Hashable, size: CGSize) -> UIImage?

    /// Returns the caching key by combining a given image id and a size.
    /// - Parameters:
    ///   - id: The unique identifier of the image.
    ///   - size: The size of the image stored in the cache.
    /// - Returns: The caching key by combining a given id and size.
    func cacheKey(for id: any Hashable, size: CGSize) -> String
}

extension AsyncPhotoCaching {
    func cacheKey(for id: any Hashable, size: CGSize) -> String {
        "\(id.hashValue):w\(Int(size.width))h\(Int(size.height))"
    }
}

The protocol defines only 3 functions: for writing, reading, and creating a caching key. We’ll provide a default implementation for the cacheKey(for:size:) function. Since the same image data should be cached per size, the cache key combines the id and size arguments. Since we are dealing with floating-point values in a string, we round the width and height.
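As a hypothetical illustration (using the shared AsyncPhotoCache instance defined in the next snippet), the same id cached at two sizes produces two distinct keys:

// Hypothetical ids and sizes; only the size suffix differs between the keys.
let cache = AsyncPhotoCache.shared
let thumbnailKey = cache.cacheKey(for: "avatar-1", size: CGSize(width: 48, height: 48)) // "<hash>:w48h48"
let detailKey = cache.cacheKey(for: "avatar-1", size: CGSize(width: 300, height: 300)) // "<hash>:w300h300"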

The next step is to create a concrete implementation. In this blog post, we’ll go for NSCache, which automatically evicts images from the cache in case of memory pressure. The downside of NSCache is that the order in which images are evicted is not defined. The implementation is straightforward.

struct AsyncPhotoCache: AsyncPhotoCaching {
    private var storage: NSCache<NSString, UIImage>
    static let shared = AsyncPhotoCache(countLimit: 10)

    init(countLimit: Int) {
        self.storage = NSCache()
        self.storage.countLimit = countLimit
    }

    func store(_ image: UIImage, forID id: any Hashable) {
        let key = cacheKey(for: id, size: image.size)
        storage.setObject(image, forKey: key as NSString)
    }

    func image(for id: any Hashable, size: CGSize) -> UIImage? {
        let key = cacheKey(for: id, size: size)
        return storage.object(forKey: key as NSString)
    }
}
}

We also added a shared instance, since we want to use a single cache for all the AsyncPhoto instances. Let’s see how the AsyncPhoto implementation changes when we add the caching layer. The answer is: not much.

struct AsyncPhoto<ID, Content, Progress, Placeholder>: View where ID: Hashable, Content: View, Progress: View, Placeholder: View {
    // redacted

    init(id value: ID = "",
         scaledSize: CGSize,
         cache: AsyncPhotoCaching = AsyncPhotoCache.shared,
         data: @escaping (ID) async -> Data?,
         content: @escaping (Image) -> Content = { $0 },
         progress: @escaping () -> Progress = { ProgressView() },
         placeholder: @escaping () -> Placeholder = { Color(white: 0.839) }) {
        // redacted
    }

    var body: some View {
        // redacted
    }

    @MainActor func load() async {
        // Here we access the cache
        if let image = cache.image(for: id, size: scaledSize) {
            phase = .success(Image(uiImage: image))
        }
        else {
            phase = .loading
            if let image = await prepareScaledImage(for: id) {
                guard !Task.isCancelled else { return }
                phase = .success(image)
            }
            else {
                guard !Task.isCancelled else { return }
                phase = .placeholder
            }
        }
    }

    private func prepareScaledImage(for id: ID) async -> Image? {
        guard let photoData = await data(id) else { return nil }
        guard let originalImage = UIImage(data: photoData) else { return nil }
        let scaledImage = await originalImage.scaled(toFill: scaledSize)
        guard let finalImage = await scaledImage.byPreparingForDisplay() else { return nil }
        // Here we store the scaled down image in the cache
        cache.store(finalImage, forID: id)
        return Image(uiImage: finalImage)
    }
}

We added a new cache argument, with the default value set to the shared instance. The load() function tries to read a cached image as the first step, and prepareScaledImage(for:) updates the cache. We rely on the cache implementation to keep the cache size small, so there is no code for manually evicting images from the cache when the ID changes. The main reason is that an AsyncPhoto instance does not have enough context for deciding this. For example, there might be other instances showing the photo for the old ID, or maybe a moment later we want to display the photo for the old ID again.

To recap what we did: we defined an interface for caching images, created an NSCache-based in-memory cache, and hooked it up to AsyncPhoto. We did all of this in a way that did not require changing any existing code using AsyncPhoto instances.

There were some other tiny improvements, like using Task.isCancelled to react more quickly to an ID change, setting the default placeholder colour to a light gray, and providing a default implementation for the content closure. Please check the example project for the full implementation. In the example project an avatar is reloaded, and as we can see, at first the spinner is shown, but once the image is cached, the change is immediate.

SwiftUIAsyncPhotoExample2 (GitHub, Xcode 15.1)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
Swift

AsyncPhoto for displaying large photos in SwiftUI

While working on one of my private projects, which deals with showing large photos as small thumbnails in a list, I found myself needing something like AsyncImage but for any kind of data source. AsyncImage looks pretty great, but sadly it is limited to loading images from a URL. It has building blocks like providing placeholder and progress views. In my case, I needed something where instead of the URL argument I would have an async closure which returns image data. This gives me enough flexibility for different cases, like loading a photo from a file or even doing what AsyncImage does and loading image data from a server. I would love to know why Apple decided to go for the narrow use-case of loading images from a URL instead of a more generic approach. In addition, I would like to pre-define the target image size, which allows me to scale the image down and therefore save memory, which would otherwise increase a lot when dealing with large photos. Enough talk, let’s jump in.

struct AsyncPhoto<ID, Content, Progress, Placeholder>: View where ID: Equatable, Content: View, Progress: View, Placeholder: View {
    @State private var phase: Phase = .loading

    let id: ID
    let data: (ID) async -> Data?
    let scaledSize: CGSize
    @ViewBuilder let content: (Image) -> Content
    @ViewBuilder let placeholder: () -> Placeholder
    @ViewBuilder let progress: () -> Progress

    init(id value: ID = "",
         scaledSize: CGSize,
         data: @escaping (ID) async -> Data?,
         content: @escaping (Image) -> Content,
         progress: @escaping () -> Progress = { ProgressView() },
         placeholder: @escaping () -> Placeholder = { Color.secondary }) {
        // …
    }

The AsyncPhoto type is generic over 4 types: ID, Content, Progress, and Placeholder. The last three are SwiftUI views, and the ID is Equatable. This allows us to notify AsyncPhoto when to reload the photo by calling the data closure, basically the same way task(id:priority:_:) works: if the id changes, the work item is run again. Since we expect to deal with large photos, we want to scale images before displaying them. And since the idea is that the view does not change its size while it is loading or displaying a placeholder, we require the scaled size to be pre-defined. The scaled size is used for creating a thumbnail image and also for setting AsyncPhoto’s frame view modifier to that size. We use a data closure here to give full flexibility on how to provide the large image data.

AsyncImage has a separate type, AsyncImagePhase, for defining the different states of the loading process. Since we need to do the same, let’s add AsyncPhoto.Phase.

extension AsyncPhoto {
    enum Phase {
        case success(Image)
        case loading
        case placeholder
    }
}

This allows us to use a switch statement in the view body and define a local state for keeping track of which phase we are currently in. The view body implementation is pretty simple, since we use view builders for the content, progress, and placeholder states. Since we want a constant size here, we use the frame modifier, and the task view modifier is the one managing the reload when the id changes.

var body: some View {
    VStack {
        switch phase {
        case .success(let image):
            content(image)
        case .loading:
            progress()
        case .placeholder:
            placeholder()
        }
    }
    .frame(width: scaledSize.width, height: scaledSize.height)
    .task(id: id, {
        await self.load()
    })
}

The load function updates the phase state and triggers the heavy work of scaling the image.

@MainActor func load() async {
    phase = .loading
    if let image = await prepareScaledImage() {
        phase = .success(image)
    }
    else {
        phase = .placeholder
    }
}

The prepareScaledImage function wraps the work of fetching the image data and scaling it.

private func prepareScaledImage() async -> Image? {
    guard let photoData = await data(id) else { return nil }
    guard let originalImage = UIImage(data: photoData) else { return nil }
    let scaledImage = await originalImage.scaled(toFill: scaledSize)
    guard let finalImage = await scaledImage.byPreparingForDisplay() else { return nil }
    return Image(uiImage: finalImage)
}

I am using a UIImage extension for scaling the image data. The implementation goes like this:

extension UIImage {
    func scaled(toFill targetSize: CGSize) async -> UIImage {
        let scaler = UIGraphicsImageRenderer(size: targetSize)
        let finalImage = scaler.image { context in
            let drawRect = size.drawRect(toFill: targetSize)
            draw(in: drawRect)
        }
        return await finalImage.byPreparingForDisplay() ?? finalImage
    }
}

private extension CGSize {
    func drawRect(toFill targetSize: CGSize) -> CGRect {
        let aspectWidth = targetSize.width / width
        let aspectHeight = targetSize.height / height
        let scale = max(aspectWidth, aspectHeight)
        let drawRect = CGRect(x: (targetSize.width - width * scale) / 2.0,
                              y: (targetSize.height - height * scale) / 2.0,
                              width: width * scale,
                              height: height * scale)
        return drawRect.integral
    }
}

Here is an example of using AsyncPhoto from my test app, where I replaced photos with generated image data.

// Example of returning large image with a constant color for simulating loading a photo.
AsyncPhoto(id: selectedColor,
           scaledSize: CGSize(width: 48, height: 48),
           data: { selectedColor in
               guard let selectedColor else { return nil }
               return await Task.detached {
                   UIImage.filled(size: CGSize(width: 5000, height: 5000),
                                  fillColor: selectedColor).pngData()
               }.value
           },
           content: { image in
               image.clipShape(Circle())
           },
           placeholder: {
               Image(systemName: "person.crop.circle")
                   .resizable()
           })
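The filled(size:fillColor:) helper used above is not a UIKit API; a minimal sketch of such an extension, assuming a plain single-color render with UIGraphicsImageRenderer, could look like this:

extension UIImage {
    // Hypothetical helper: renders an image of the given size filled with a single
    // color, used here only to simulate large photo data.
    static func filled(size: CGSize, fillColor: UIColor) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { context in
            fillColor.setFill()
            context.fill(CGRect(origin: .zero, size: size))
        }
    }
}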

SwiftUIAsyncPhotoExample (GitHub, Xcode 15.0.1)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
Generics iOS Swift SwiftUI

Loading async data for SwiftUI views

Sometimes we need to invoke an async function for fetching data before presenting a SwiftUI view. Therefore, a common flow is showing a spinner while the data is being fetched and then showing the main view. Moreover, if an error occurs, we show a failure view with a retry button. Let’s dive into how to build such a view in a generic way.

As said before, our container view, let’s call it ContentPrepareView (similar naming to Apple’s ContentUnavailableView), has three distinct states: loading, failure, and success (named “content” in the enum).

extension ContentPrepareView {
    enum ViewContent {
        case loading
        case content
        case failure(Error)
    }
}

We’ll go for a fully generic implementation where each of the view states corresponds to a view builder. This gives us flexibility if in some places we want to use a custom loading view or a different failure view. On the other hand, most of the time we just want to use common loading and failure views, which is why we set default values for the loading and failure view builders (see below). In addition to the view builders, we need an async throwing task closure which handles the data fetching/preparation. If we put it all together, the ContentPrepareView becomes this:

struct ContentPrepareView<Content, Failure, Loading>: View where Content: View, Failure: View, Loading: View {
    @State private var viewContent: ViewContent = .loading

    @ViewBuilder let content: () -> Content
    @ViewBuilder let failure: (Error, @escaping () async -> Void) -> Failure
    @ViewBuilder let loading: () -> Loading
    let task: () async throws -> Void

    init(content: @escaping () -> Content,
         failure: @escaping (Error, @escaping () async -> Void) -> Failure = { FailureView(error: $0, retryTask: $1) },
         loading: @escaping () -> Loading = { ProgressView() },
         task: @escaping () async throws -> Void) {
        self.content = content
        self.failure = failure
        self.loading = loading
        self.task = task
    }

    var body: some View {
        Group {
            switch viewContent {
            case .content:
                content()
            case .failure(let error):
                failure(error, loadTask)
            case .loading:
                loading()
            }
        }
        .onLoad(perform: loadTask)
    }

    // redacted
}

Since the loading, failure, and success views can be any kind of views, our view needs to be a generic view. The body of the view has a switch statement for creating a view for the current view state. One thing to note here is that the onLoad view modifier is a custom one; the idea is that it makes sure the content preparation work runs only once per view lifetime (onAppear() or task() can run multiple times). The reasoning is that we want to show the loading spinner only when the view is presented for the first time, not every time it appears again. The loadTask function is async and has the responsibility of running the passed-in async task closure and updating the current view state.
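The onLoad modifier itself is not shown in this post; a minimal sketch of such a modifier, using a @State flag so the work runs only once per view lifetime (all names here are assumptions), could look like this:

struct OnLoadModifier: ViewModifier {
    @State private var hasLoaded = false
    let action: () async -> Void

    func body(content: Content) -> some View {
        content.task {
            // task runs on every appearance, but the flag ensures the action
            // runs only once per view lifetime.
            guard !hasLoaded else { return }
            hasLoaded = true
            await action()
        }
    }
}

extension View {
    func onLoad(perform action: @escaping () async -> Void) -> some View {
        modifier(OnLoadModifier(action: action))
    }
}

With something like that in place, loadTask below simply runs the task closure and updates the view state.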

struct ContentPrepareView<Content, Failure, Loading>: View where Content: View, Failure: View, Loading: View {
    // redacted

    @MainActor func loadTask() async {
        do {
            viewContent = .loading
            try await task()
            viewContent = .content
        }
        catch {
            viewContent = .failure(error)
        }
    }
}

In this example we use a custom FailureView, which is a small view wrapping Apple’s ContentUnavailableView. It sets a label and a description and handles creating the retry button.

struct FailureView: View {
    let error: Error
    let retryTask: () async -> Void

    var body: some View {
        ContentUnavailableView(label: {
            Label("Failed to load", systemImage: "exclamationmark.circle.fill")
        }, description: {
            Text(error.localizedDescription)
        }, actions: {
            Button(action: {
                Task { await retryTask() }
            }, label: {
                Text("Retry")
            })
        })
    }
}

Here is an example of how to use the final ContentPrepareView. For demo purposes, it fails the first load and lets the second one succeed.

struct ContentView: View {
    // Demo: first load leads to an error
    @State private var showsError = true

    var body: some View {
        ContentPrepareView {
            VStack {
                Image(systemName: "globe")
                    .imageScale(.large)
                    .foregroundStyle(.tint)
                Text("Hello, world!")
            }
            .padding()
        } task: {
            try await Task.sleep(nanoseconds: 3_000_000_000)
            // Demo: Retrying a task leads to success
            guard showsError else { return }
            showsError = false
            throw LoadingError.example
        }
    }
}

enum LoadingError: LocalizedError {
    case example

    var errorDescription: String? {
        "The connection to Internet is unavailable"
    }
}

ContentPrepareViewExample (GitHub, Xcode 15.0.1)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
iOS Swift

Async-await and completion handler compatibility in Swift

The prominent way of writing async code before async-await arrived in Swift was using completion handlers. We pass in a completion handler which then gets called at some later time. When working with larger codebases, it is not straightforward to convert existing code to newer techniques like async-await. Often we make these changes over time, which means that when wrapping completion handler based code, we have the same function both in the completion handler form and in the async-await form. Fortunately, it is easy to wrap existing completion handler based code and provide an async-await version. The withCheckedThrowingContinuation() function is exactly for that use-case. It provides an object which receives the output of our completion handler based code, most of the time a value or an error. If we use the Result type in completion handlers, then wrapping takes only 3 lines of code, thanks to the fact that the continuation has a dedicated function for resuming with a Result.

final class ImageFetcher {
    func fetchImages(for identifiers: [String], completionHandler: @escaping (Result<[String: UIImage], Error>) -> Void) {
        // …
    }
}

extension ImageFetcher {
    func fetchImages(for identifiers: [String]) async throws -> [String: UIImage] {
        try await withCheckedThrowingContinuation { continuation in
            fetchImages(for: identifiers) { result in
                continuation.resume(with: result)
            }
        }
    }
}

Great, but what if we add new code to an existing codebase relying heavily on completion handler based code? Can we start with an async function and wrap that as well? Sure. In the example below, we have some sort of DataFetcher which has an async function. If we needed to call this function from completion handler based code, we could add a wrapping function pretty easily. Later, when we have fully converted to async-await, it can be discarded just as easily. So how do we do it? We start the wrapping code by creating a Task, which begins running automatically and also provides an async context for calling async functions. This means that we can call the async function with try await and catch the error if it throws. Then it is just a matter of calling the completion handler. It depends on the use-case and how the code is meant to be used, but we should always think about which thread should call the completion handler. In the example, we always switch to the main thread, because the Task’s closure here is not bound to any actor and may run on a background thread.

final class DataFetcher {
    func fetchData(for identifiers: [String]) async throws -> [String: Data] {
        // …
    }
}

extension DataFetcher {
    func fetchData(for identifiers: [String], completionHandler: @escaping (Result<[String: Data], Error>) -> Void) {
        Task {
            do {
                let data = try await fetchData(for: identifiers)
                await MainActor.run {
                    completionHandler(.success(data))
                }
            }
            catch {
                await MainActor.run {
                    completionHandler(.failure(error))
                }
            }
        }
    }
}

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
iOS Swift

TaskGroup error handling in Swift

Task groups in Swift are used for running an arbitrary number of child tasks, while the group handles things like cancellation and priorities for us. Task groups are created with either withThrowingTaskGroup(of:returning:body:) or withTaskGroup(of:returning:body:). The latter is for cases when errors are not thrown. In this blog post, we will look at two cases of generating Data objects using a task group. In the first case, we want to stop the group as soon as an error occurs and discard all the remaining work. The second case ignores any errors in child tasks and just collects the Data objects of the tasks which were successful.

The example we are going to use simulates creating image data for multiple identifiers and then returning an array of Data objects. The actual image creation and processing is simulated with Task’s sleep function. Since task groups forward cancellation to all the child tasks, the processor implementation also calls Task.checkCancellation() so it can react to cancellation and stop as soon as possible, avoiding unnecessary work.

struct ImageProcessor {
    static func process(identifier: Int) async throws -> Data {
        // Read image data
        try await Task.sleep(nanoseconds: UInt64(identifier) * UInt64(1e8))
        try Task.checkCancellation()
        // Simulate processing the data and transforming it
        try await Task.sleep(nanoseconds: UInt64(1e8))
        try Task.checkCancellation()
        if identifier != 2 {
            print("Success: \(identifier)")
            return Data()
        }
        else {
            print("Failing: \(identifier)")
            throw ProcessingError.invalidData
        }
    }

    enum ProcessingError: Error {
        case invalidData
    }
}

Now we have the processor created. Let’s see an example of calling this function from a task group. As soon as we detect an error in one of the child tasks, we would like to stop processing and return an error from the task group.

let imageDatas = try await withThrowingTaskGroup(of: Data.self, returning: [Data].self) { group in
    imageIdentifiers.forEach { imageIdentifier in
        group.addTask {
            return try await ImageProcessor.process(identifier: imageIdentifier)
        }
    }

    var results = [Data]()
    for try await imageData in group {
        results.append(imageData)
    }
    return results
}

We loop over the imageIdentifiers array and create a child task for each of them. When the child tasks are created and running, we wait for them to finish by looping over the group and awaiting each result. If a child task throws an error, the for loop re-throws it, which makes the task group cancel all the remaining child tasks and then return the error to the caller. Since child task results arrive in completion order, the group throws the error of the first failing child task we receive while iterating. Also, keep in mind that cancellation needs to be handled explicitly by the child task’s implementation by calling Task.checkCancellation().

Great, but what if we would like to ignore errors in child tasks and just collect the Data objects of all the successful tasks? This can be implemented with the withTaskGroup function by making the child task’s return type optional and handling the error within the child task’s closure. If an error is thrown, we return nil, and later, when looping over the child tasks, we skip nil values with AsyncSequence’s compactMap().

let imageDatas = await withTaskGroup(of: Data?.self, returning: [Data].self) { group in
    imageIdentifiers.forEach { imageIdentifier in
        group.addTask {
            do {
                return try await ImageProcessor.process(identifier: imageIdentifier)
            } catch {
                return nil
            }
        }
    }

    var results = [Data]()
    for await imageData in group.compactMap({ $0 }) {
        results.append(imageData)
    }
    return results
}

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
Combine iOS Swift

Async-await support for Combine’s sink and map

Async-await in Swift is getting more popular as time goes by, but Combine publishers do not currently have built-in support for it. In this blog post, we’ll see how to extend some of the existing publishers.

Async-await supported sink

One case where I have encountered this is when I have wanted to call an async function in sink. Although I could wrap the call with a Task within the sink subscriber, it gets unnecessarily long if I need to do it in many places. Instead, we can do it once and add an async-await supported sink subscriber.

extension Publisher where Self.Failure == Never {
    func sink(receiveValue: @escaping ((Self.Output) async -> Void)) -> AnyCancellable {
        sink { value in
            Task {
                await receiveValue(value)
            }
        }
    }
}

// Allows writing sink without Task
$imageURL
    .compactMap({ $0 })
    .sink { [weak self] url in
        await self?.processImageURL(url)
    }
    .store(in: &cancellables)

Async-await supported map

The Combine framework has map and tryMap for supporting throwing functions, but it is lacking something like tryAwaitMap for async throwing functions. Combine has a publisher named Future which supports performing asynchronous work and publishing a value. We can use this to wrap a Task with asynchronous work. Another publisher in Combine is flatMap, which is used for turning one kind of publisher into a new kind of publisher. Therefore, we can combine these to turn an upstream publisher into a new publisher of type Future. The first tryAwaitMap below is for the case where the upstream publisher emits errors, and the second one is for the case where the upstream does not emit errors. We need to handle these separately, since we need to tell Combine how error types are handled (a non-throwing publisher has its failure type set to Never).

extension Publisher {
    public func tryAwaitMap<T>(_ transform: @escaping (Self.Output) async throws -> T) -> Publishers.FlatMap<Future<T, Error>, Self> {
        flatMap { value in
            Future { promise in
                Task {
                    do {
                        let result = try await transform(value)
                        promise(.success(result))
                    }
                    catch {
                        promise(.failure(error))
                    }
                }
            }
        }
    }

    public func tryAwaitMap<T>(_ transform: @escaping (Self.Output) async throws -> T) -> Publishers.FlatMap<Future<T, Error>, Publishers.SetFailureType<Self, Error>> {
        // The same implementation but the returned publisher transforms failures with SetFailureType.
    }
}

// Case 1: throwing upstream publisher
$imageURL
    .tryMap({ try Self.validateURL($0) })
    .tryAwaitMap({ try await ImageProcessor.process($0) })
    .map({ Image(uiImage: $0) })
    .sink(receiveCompletion: { print("completion: \($0)") },
          receiveValue: { print($0) })
    .store(in: &cancellables)

// Case 2: non-throwing upstream publisher
$imageURL
    .compactMap({ $0 })
    .tryAwaitMap({ try await ImageProcessor.process($0) })
    .map({ Image(uiImage: $0) })
    .sink(receiveCompletion: { print("completion: \($0)") },
          receiveValue: { print($0) })
    .store(in: &cancellables)
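The body of the second overload is redacted above; one possible sketch, assuming the overload is constrained to non-failing upstream publishers so that the SetFailureType-based flatMap overload applies, could look like this:

extension Publisher where Self.Failure == Never {
    public func tryAwaitMap<T>(_ transform: @escaping (Self.Output) async throws -> T) -> Publishers.FlatMap<Future<T, Error>, Publishers.SetFailureType<Self, Error>> {
        flatMap { value in
            Future { promise in
                Task {
                    do {
                        promise(.success(try await transform(value)))
                    }
                    catch {
                        promise(.failure(error))
                    }
                }
            }
        }
    }
}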

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
iOS Swift Swift Package

Handling never finishing async functions in Swift package tests

Why does my CI never finish and post a message to the merge request? I logged in to CI and, oh, my merge job had been running for 23 minutes already, although typically it finishes in 4 minutes. What was going on? Nothing else than a unit test marked with async still waiting for an async function to finish. So what can we do to avoid this? Let’s first create a Swift package which demonstrates the issue.

struct ImageLoader {
    func loadImage(for identifier: String) async throws -> UIImage {
        // Delay for 100 seconds
        try await Task.sleep(nanoseconds: UInt64(100 * 1e9))
        return UIImage()
    }
}

And a simple unit-test for the successful case.

final class ImageLoaderTests: XCTestCase {
    func testLoadingImageSuccessfully() async throws {
        let imageLoader = ImageLoader()
        _ = try await imageLoader.loadImage(for: "identifier")
    }
}

This test passes after 100 seconds, but clearly, we do not want to wait so long if something takes way too much time. Instead, we want to fail the test when it is still running after 5 seconds.

Exploring XCTestCase executionTimeAllowance

XCTestCase has a property called executionTimeAllowance which we can set. Ideally, I would like to write something like executionTimeAllowance = 5 and have Xcode fail the test with a timeout failure after 5 seconds.

override func setUpWithError() throws {
    executionTimeAllowance = 5 // gets rounded up to 60
}

But if we read the documentation, it mentions that the value set to this property is rounded up to the nearest minute. In addition, this value is not used unless you enable it explicitly: “To use this setting, enable timeouts in your test plan or set the -test-timeouts-enabled option to YES when using xcodebuild.”. If we are working on a Swift package, I am actually not sure how to set this in the Package.swift so that it gets applied when running the tests from Xcode or from the command line.

Custom test execution with XCTestExpectation

One way to avoid never-finishing tests is to use the good old XCTestExpectation. We can set up a method which runs the async work and then waits for the test expectation with a timeout. If a timeout occurs, the test fails. If the async function throws an error, we capture it and fail the test with XCTFail.

final class ImageLoaderTests: XCTestCase {
    func testLoadingImageSuccessfully() {
        execute(withTimeout: 5) {
            let imageLoader = ImageLoader()
            _ = try await imageLoader.loadImage(for: "identifier")
        }
    }
}

extension XCTestCase {
    func execute(withTimeout timeout: TimeInterval, file: StaticString = #filePath, line: UInt = #line, workItem: @escaping () async throws -> Void) {
        let expectation = expectation(description: "wait for async function")
        var workItemError: Error?
        let captureError = { workItemError = $0 }
        let task = Task {
            do {
                try await workItem()
            }
            catch {
                captureError(error)
            }
            expectation.fulfill()
        }
        waitForExpectations(timeout: timeout) { _ in
            if let error = workItemError {
                XCTFail("\(error)", file: file, line: line)
            }
            task.cancel()
        }
    }
}

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.

Categories
iOS Swift SwiftUI

Wrapping delegates for @MainActor consumers in Swift

Sometimes we need to handle delegates in a class which has the @MainActor annotation. Often it is a view model, where we expect the code to run on the main thread. Therefore, view models have the @MainActor annotation, since we want their methods to run on the main thread when interacting with other async code. In the example below, we’ll look at integrating a delegate-based ImageBatchLoader class which calls its delegate methods on a background thread. The end goal is to handle the delegate in a view model and make sure it runs on the main thread.

final class ImageBatchLoader {
    weak var delegate: ImageBatchLoaderDelegate?

    init(delegate: ImageBatchLoaderDelegate) {
        self.delegate = delegate
    }

    func start() {
        DispatchQueue.global().async {
            self.delegate?.imageLoader(self, didLoadBatch: [UIImage()])
        }
    }
}

protocol ImageBatchLoaderDelegate: AnyObject {
    func imageLoader(_ imageLoader: ImageBatchLoader, didLoadBatch batch: [UIImage])
}
An example ImageBatchLoader with stubbed out start method.

This is an example of a class which uses a delegate and calls the delegate methods from background threads. If we have a view model with the @MainActor annotation, we can’t simply conform to that delegate, since the delegate does not have any async-await support. Xcode shows a warning saying that the protocol is non-isolated. The protocol would be isolated if it had, for example, the @MainActor annotation as well. Let’s say this is not possible because it is third-party code.
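For comparison, if we owned the protocol, isolating it could be as simple as the sketch below (the protocol name is hypothetical); the rest of the post assumes we cannot do this.

// Annotating the protocol with @MainActor makes its requirements main actor-isolated,
// so a @MainActor view model could conform to it directly.
@MainActor protocol MainActorImageBatchLoaderDelegate: AnyObject {
    func imageLoader(_ imageLoader: ImageBatchLoader, didLoadBatch batch: [UIImage])
}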

The solution I have personally settled on is creating a wrapper class which conforms to that delegate and then uses main-thread-bound closures to notify when any of the delegate callbacks happen.

final class ImageBatchLoaderHandler: ImageBatchLoaderDelegate {
    var didLoadBatch: @MainActor ([UIImage]) -> Void = { _ in }

    func imageLoader(_ imageLoader: ImageBatchLoader, didLoadBatch batch: [UIImage]) {
        print("isMainThread", Thread.isMainThread, #function)
        Task {
            await didLoadBatch(batch)
        }
    }
}

Here we have a class which conforms to ImageBatchLoaderDelegate and provides a didLoadBatch closure with the @MainActor annotation. Since we use @MainActor and tap into async-await concurrency, we also need an async context, which the Task provides.

@MainActor final class ViewModel: ObservableObject {
    private let imageLoader: ImageBatchLoader
    private let imageLoaderHandler: ImageBatchLoaderHandler

    init() {
        imageLoaderHandler = ImageBatchLoaderHandler()
        imageLoader = ImageBatchLoader(delegate: imageLoaderHandler)
        imageLoaderHandler.didLoadBatch = handleBatch
        imageLoader.start()
    }

    func handleBatch(_ batch: [UIImage]) {
        print("isMainThread", Thread.isMainThread, #function)
        // redacted
    }
}

Finally, we have hooked up the image loader and its handler, and we forward didLoadBatch to a separate function which is part of the view model. With a little bit of code, we achieved what we wanted: listening to delegate callbacks and forwarding them to the view model on the main thread. If we run the code, we see that the delegate callback runs on a background thread but the view model method runs on the main thread.

isMainThread false imageLoader(_:didLoadBatch:)
isMainThread true handleBatch(_:)

If this was helpful, please let me know on Mastodon @toomasvahter or Twitter @toomasvahter. Feel free to subscribe to RSS feed. Thank you for reading.