Jan 16 23:58:48.885742 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 16 23:58:48.885765 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 16 23:58:48.885775 kernel: KASLR enabled
Jan 16 23:58:48.885781 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 16 23:58:48.885786 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 16 23:58:48.885792 kernel: random: crng init done
Jan 16 23:58:48.885799 kernel: ACPI: Early table checksum verification disabled
Jan 16 23:58:48.885805 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 16 23:58:48.885811 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 16 23:58:48.885819 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.885825 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.885831 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.885837 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.885843 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.888882 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.888918 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.888925 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.888933 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 23:58:48.888939 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 16 23:58:48.888946 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 16 23:58:48.888953 kernel: NUMA: Failed to initialise from firmware
Jan 16 23:58:48.888959 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:58:48.888966 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 16 23:58:48.888972 kernel: Zone ranges:
Jan 16 23:58:48.888979 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 23:58:48.888987 kernel: DMA32 empty
Jan 16 23:58:48.888994 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 16 23:58:48.889000 kernel: Movable zone start for each node
Jan 16 23:58:48.889006 kernel: Early memory node ranges
Jan 16 23:58:48.889013 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 16 23:58:48.889019 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 16 23:58:48.889026 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 16 23:58:48.889032 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 16 23:58:48.889039 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 16 23:58:48.889045 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 16 23:58:48.889051 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 16 23:58:48.889058 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 23:58:48.889066 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 16 23:58:48.889072 kernel: psci: probing for conduit method from ACPI.
Jan 16 23:58:48.889078 kernel: psci: PSCIv1.1 detected in firmware.
Jan 16 23:58:48.889087 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 23:58:48.889094 kernel: psci: Trusted OS migration not required
Jan 16 23:58:48.889101 kernel: psci: SMC Calling Convention v1.1
Jan 16 23:58:48.889109 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 16 23:58:48.889117 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 16 23:58:48.889123 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 16 23:58:48.889130 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 23:58:48.889137 kernel: Detected PIPT I-cache on CPU0
Jan 16 23:58:48.889144 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 23:58:48.889151 kernel: CPU features: detected: Hardware dirty bit management
Jan 16 23:58:48.889158 kernel: CPU features: detected: Spectre-v4
Jan 16 23:58:48.889164 kernel: CPU features: detected: Spectre-BHB
Jan 16 23:58:48.889171 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 16 23:58:48.889179 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 16 23:58:48.889186 kernel: CPU features: detected: ARM erratum 1418040
Jan 16 23:58:48.889193 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 16 23:58:48.889200 kernel: alternatives: applying boot alternatives
Jan 16 23:58:48.889209 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:58:48.889216 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 23:58:48.889223 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 23:58:48.889230 kernel: Fallback order for Node 0: 0
Jan 16 23:58:48.889237 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 16 23:58:48.889244 kernel: Policy zone: Normal
Jan 16 23:58:48.889251 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 23:58:48.889259 kernel: software IO TLB: area num 2.
Jan 16 23:58:48.889266 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 16 23:58:48.889274 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Jan 16 23:58:48.889281 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 23:58:48.889287 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 23:58:48.889295 kernel: rcu: RCU event tracing is enabled.
Jan 16 23:58:48.889302 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 23:58:48.889309 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 23:58:48.889315 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 23:58:48.889322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 23:58:48.889329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 23:58:48.889336 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 23:58:48.889344 kernel: GICv3: 256 SPIs implemented
Jan 16 23:58:48.889351 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 23:58:48.889357 kernel: Root IRQ handler: gic_handle_irq
Jan 16 23:58:48.889364 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 16 23:58:48.889371 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 16 23:58:48.889378 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 16 23:58:48.889385 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 23:58:48.889391 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 16 23:58:48.889398 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 16 23:58:48.889405 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 16 23:58:48.889412 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 23:58:48.889421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:58:48.889428 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 16 23:58:48.889435 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 16 23:58:48.889442 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 16 23:58:48.889449 kernel: Console: colour dummy device 80x25
Jan 16 23:58:48.889456 kernel: ACPI: Core revision 20230628
Jan 16 23:58:48.889463 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 16 23:58:48.889470 kernel: pid_max: default: 32768 minimum: 301
Jan 16 23:58:48.889477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 23:58:48.889484 kernel: landlock: Up and running.
Jan 16 23:58:48.889492 kernel: SELinux: Initializing.
Jan 16 23:58:48.889499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:58:48.889506 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:58:48.889514 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:58:48.889521 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:58:48.889528 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 23:58:48.889535 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 23:58:48.889542 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 16 23:58:48.889549 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 16 23:58:48.889557 kernel: Remapping and enabling EFI services.
Jan 16 23:58:48.889565 kernel: smp: Bringing up secondary CPUs ...
Jan 16 23:58:48.889572 kernel: Detected PIPT I-cache on CPU1
Jan 16 23:58:48.889579 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 16 23:58:48.889586 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 16 23:58:48.889593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 23:58:48.889600 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 16 23:58:48.889607 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 23:58:48.889614 kernel: SMP: Total of 2 processors activated.
Jan 16 23:58:48.889621 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 23:58:48.889629 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 16 23:58:48.889637 kernel: CPU features: detected: Common not Private translations
Jan 16 23:58:48.889648 kernel: CPU features: detected: CRC32 instructions
Jan 16 23:58:48.889657 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 16 23:58:48.889664 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 16 23:58:48.889672 kernel: CPU features: detected: LSE atomic instructions
Jan 16 23:58:48.889680 kernel: CPU features: detected: Privileged Access Never
Jan 16 23:58:48.889687 kernel: CPU features: detected: RAS Extension Support
Jan 16 23:58:48.889696 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 16 23:58:48.889704 kernel: CPU: All CPU(s) started at EL1
Jan 16 23:58:48.889711 kernel: alternatives: applying system-wide alternatives
Jan 16 23:58:48.889718 kernel: devtmpfs: initialized
Jan 16 23:58:48.889726 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 23:58:48.889734 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 23:58:48.889741 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 23:58:48.889748 kernel: SMBIOS 3.0.0 present.
Jan 16 23:58:48.889757 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 16 23:58:48.889765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 23:58:48.889772 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 23:58:48.889780 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 23:58:48.889787 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 23:58:48.889795 kernel: audit: initializing netlink subsys (disabled)
Jan 16 23:58:48.889802 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Jan 16 23:58:48.889809 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 23:58:48.889817 kernel: cpuidle: using governor menu
Jan 16 23:58:48.889826 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 23:58:48.889833 kernel: ASID allocator initialised with 32768 entries
Jan 16 23:58:48.889841 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 23:58:48.889848 kernel: Serial: AMBA PL011 UART driver
Jan 16 23:58:48.889890 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 16 23:58:48.889898 kernel: Modules: 0 pages in range for non-PLT usage
Jan 16 23:58:48.889905 kernel: Modules: 509008 pages in range for PLT usage
Jan 16 23:58:48.889913 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 23:58:48.889920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 23:58:48.889930 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 23:58:48.889937 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 23:58:48.889993 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 23:58:48.890001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 23:58:48.890009 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 23:58:48.890016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 23:58:48.890024 kernel: ACPI: Added _OSI(Module Device)
Jan 16 23:58:48.890031 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 23:58:48.890039 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 23:58:48.890050 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 23:58:48.890057 kernel: ACPI: Interpreter enabled
Jan 16 23:58:48.890065 kernel: ACPI: Using GIC for interrupt routing
Jan 16 23:58:48.890072 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 23:58:48.890080 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 16 23:58:48.890087 kernel: printk: console [ttyAMA0] enabled
Jan 16 23:58:48.890097 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 23:58:48.890255 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 23:58:48.890332 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 23:58:48.890402 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 23:58:48.890481 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 16 23:58:48.890552 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 16 23:58:48.890562 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 16 23:58:48.890570 kernel: PCI host bridge to bus 0000:00
Jan 16 23:58:48.890641 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 16 23:58:48.890702 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 23:58:48.890765 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 16 23:58:48.890824 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 23:58:48.893787 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 16 23:58:48.893982 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 16 23:58:48.894060 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 16 23:58:48.894128 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:58:48.894212 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.894280 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 16 23:58:48.894359 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.894427 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 16 23:58:48.894500 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.894566 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 16 23:58:48.894646 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.894728 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 16 23:58:48.894807 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.896610 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 16 23:58:48.896708 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.896781 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 16 23:58:48.896920 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.897001 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 16 23:58:48.897540 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.897620 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 16 23:58:48.897694 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 16 23:58:48.897761 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 16 23:58:48.897845 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 16 23:58:48.899043 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 16 23:58:48.899133 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:58:48.899205 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 16 23:58:48.899275 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:58:48.899346 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:58:48.899422 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 16 23:58:48.899498 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 16 23:58:48.899575 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 16 23:58:48.899646 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 16 23:58:48.899715 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 16 23:58:48.899791 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 16 23:58:48.900965 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 16 23:58:48.901066 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 16 23:58:48.901136 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 16 23:58:48.901206 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 16 23:58:48.901283 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 16 23:58:48.901353 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 16 23:58:48.901422 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:58:48.901501 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 16 23:58:48.901569 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 16 23:58:48.901637 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 16 23:58:48.901707 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 16 23:58:48.901777 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 16 23:58:48.901844 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:58:48.903071 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 16 23:58:48.903156 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 16 23:58:48.903223 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 16 23:58:48.903288 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 16 23:58:48.903359 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 16 23:58:48.903441 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:58:48.903511 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 16 23:58:48.903580 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 16 23:58:48.903646 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 16 23:58:48.903715 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 16 23:58:48.903786 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 16 23:58:48.904705 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:58:48.904818 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 16 23:58:48.904981 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 16 23:58:48.905058 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:58:48.905126 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 16 23:58:48.905200 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 16 23:58:48.905269 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:58:48.905336 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 16 23:58:48.905406 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 16 23:58:48.905472 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:58:48.905536 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 16 23:58:48.905605 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 16 23:58:48.905671 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:58:48.905739 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 16 23:58:48.905806 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 16 23:58:48.905940 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:58:48.906015 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 16 23:58:48.906081 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:58:48.906150 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 16 23:58:48.906222 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:58:48.906289 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 16 23:58:48.906354 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:58:48.906421 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 16 23:58:48.906488 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:58:48.906554 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 16 23:58:48.906620 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:58:48.906689 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 16 23:58:48.906757 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:58:48.906825 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 16 23:58:48.906929 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:58:48.907006 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 16 23:58:48.907073 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:58:48.907145 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 16 23:58:48.907215 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 16 23:58:48.907282 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 16 23:58:48.907347 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 16 23:58:48.907413 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 16 23:58:48.907478 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 16 23:58:48.907544 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 16 23:58:48.907610 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 16 23:58:48.907677 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 16 23:58:48.907745 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 16 23:58:48.907811 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 16 23:58:48.907985 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 16 23:58:48.908060 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 16 23:58:48.908126 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 16 23:58:48.908192 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 16 23:58:48.908258 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 16 23:58:48.908323 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 16 23:58:48.908392 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 16 23:58:48.908457 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 16 23:58:48.908524 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 16 23:58:48.908591 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 16 23:58:48.908664 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 16 23:58:48.908731 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 23:58:48.908798 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 16 23:58:48.908941 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 23:58:48.909026 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 16 23:58:48.909092 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 23:58:48.909228 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:58:48.909310 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 16 23:58:48.909404 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 23:58:48.909473 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 16 23:58:48.909567 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 16 23:58:48.909639 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:58:48.909715 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 23:58:48.909825 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 16 23:58:48.910006 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 23:58:48.910082 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 16 23:58:48.910156 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 16 23:58:48.910221 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:58:48.910294 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 23:58:48.910365 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 23:58:48.910430 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 16 23:58:48.910496 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 16 23:58:48.910562 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:58:48.910636 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 16 23:58:48.910708 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 16 23:58:48.910774 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 23:58:48.910840 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 16 23:58:48.910969 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 23:58:48.911040 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:58:48.911113 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 16 23:58:48.911181 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 16 23:58:48.911246 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 23:58:48.911318 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 16 23:58:48.911383 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 23:58:48.911450 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:58:48.911520 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 16 23:58:48.911588 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 16 23:58:48.911655 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 16 23:58:48.911721 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 23:58:48.911787 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 16 23:58:48.911876 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 23:58:48.911964 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:58:48.912034 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 23:58:48.912099 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 16 23:58:48.912164 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 23:58:48.912230 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:58:48.912299 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 23:58:48.912364 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 16 23:58:48.912449 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 16 23:58:48.912519 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:58:48.912586 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 16 23:58:48.912645 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 16 23:58:48.912703 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 16 23:58:48.912778 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 16 23:58:48.912842 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 16 23:58:48.914058 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 23:58:48.914142 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 16 23:58:48.914204 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 16 23:58:48.914263 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 23:58:48.914332 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 16 23:58:48.914392 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 16 23:58:48.914457 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 23:58:48.914525 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 16 23:58:48.914585 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 16 23:58:48.914661 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 23:58:48.914731 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 16 23:58:48.916013 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 16 23:58:48.916090 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 23:58:48.916167 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 16 23:58:48.916229 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 16 23:58:48.916291 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 23:58:48.916359 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 16 23:58:48.916424 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 16 23:58:48.916485 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 23:58:48.916552 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 16 23:58:48.916613 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 16 23:58:48.916673 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 23:58:48.916741 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 16 23:58:48.916802 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 16 23:58:48.917974 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 16 23:58:48.917996 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 16 23:58:48.918004 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 16 23:58:48.918013 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 16 23:58:48.918025 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 16 23:58:48.918034 kernel: iommu: Default domain type: Translated
Jan 16 23:58:48.918042 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 16 23:58:48.918052 kernel: efivars: Registered efivars operations
Jan 16 23:58:48.918060 kernel: vgaarb: loaded
Jan 16 23:58:48.918071 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 16 23:58:48.918079 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 23:58:48.918087 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 23:58:48.918094 kernel: pnp: PnP ACPI init
Jan 16 23:58:48.918186 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 16 23:58:48.918198 kernel: pnp: PnP ACPI: found 1 devices
Jan 16 23:58:48.918206 kernel: NET: Registered PF_INET protocol family
Jan 16 23:58:48.918214 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 23:58:48.918225 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 23:58:48.918233 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 23:58:48.918241 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 23:58:48.918249
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 16 23:58:48.918258 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 16 23:58:48.918266 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 23:58:48.918296 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 23:58:48.918306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 23:58:48.918389 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 16 23:58:48.918403 kernel: PCI: CLS 0 bytes, default 64 Jan 16 23:58:48.918411 kernel: kvm [1]: HYP mode not available Jan 16 23:58:48.918419 kernel: Initialise system trusted keyrings Jan 16 23:58:48.918427 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 16 23:58:48.918435 kernel: Key type asymmetric registered Jan 16 23:58:48.918442 kernel: Asymmetric key parser 'x509' registered Jan 16 23:58:48.918450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 16 23:58:48.918458 kernel: io scheduler mq-deadline registered Jan 16 23:58:48.918466 kernel: io scheduler kyber registered Jan 16 23:58:48.918475 kernel: io scheduler bfq registered Jan 16 23:58:48.918483 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 16 23:58:48.918553 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 16 23:58:48.918620 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 16 23:58:48.918689 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.918769 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 16 23:58:48.918843 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 16 23:58:48.918954 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.919030 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Jan 16 23:58:48.919098 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 16 23:58:48.919164 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.919233 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 16 23:58:48.919350 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 16 23:58:48.919447 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.919520 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 16 23:58:48.919590 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 16 23:58:48.919659 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.919728 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 16 23:58:48.919796 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 16 23:58:48.920977 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.921073 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 16 23:58:48.921142 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 16 23:58:48.921209 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.921278 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 16 23:58:48.921344 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 16 23:58:48.921416 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.921427 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Jan 16 23:58:48.921493 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 16 23:58:48.921559 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 16 23:58:48.921624 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 23:58:48.921635 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 16 23:58:48.921645 kernel: ACPI: button: Power Button [PWRB] Jan 16 23:58:48.921653 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 16 23:58:48.921728 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 16 23:58:48.921802 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 16 23:58:48.921814 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 23:58:48.921824 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 16 23:58:48.922616 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 16 23:58:48.922637 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 16 23:58:48.922645 kernel: thunder_xcv, ver 1.0 Jan 16 23:58:48.922658 kernel: thunder_bgx, ver 1.0 Jan 16 23:58:48.922666 kernel: nicpf, ver 1.0 Jan 16 23:58:48.922674 kernel: nicvf, ver 1.0 Jan 16 23:58:48.922758 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 16 23:58:48.922822 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T23:58:48 UTC (1768607928) Jan 16 23:58:48.922832 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 16 23:58:48.922841 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 16 23:58:48.922849 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 16 23:58:48.923546 kernel: watchdog: Hard watchdog permanently disabled Jan 16 23:58:48.923555 kernel: NET: Registered PF_INET6 protocol family Jan 16 23:58:48.923563 kernel: Segment Routing with IPv6 Jan 16 23:58:48.923571 kernel: In-situ OAM 
(IOAM) with IPv6 Jan 16 23:58:48.923579 kernel: NET: Registered PF_PACKET protocol family Jan 16 23:58:48.923587 kernel: Key type dns_resolver registered Jan 16 23:58:48.923595 kernel: registered taskstats version 1 Jan 16 23:58:48.923603 kernel: Loading compiled-in X.509 certificates Jan 16 23:58:48.923611 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 16 23:58:48.923621 kernel: Key type .fscrypt registered Jan 16 23:58:48.923629 kernel: Key type fscrypt-provisioning registered Jan 16 23:58:48.923637 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 23:58:48.923645 kernel: ima: Allocated hash algorithm: sha1 Jan 16 23:58:48.923653 kernel: ima: No architecture policies found Jan 16 23:58:48.923661 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 16 23:58:48.923669 kernel: clk: Disabling unused clocks Jan 16 23:58:48.923677 kernel: Freeing unused kernel memory: 39424K Jan 16 23:58:48.923685 kernel: Run /init as init process Jan 16 23:58:48.923695 kernel: with arguments: Jan 16 23:58:48.923703 kernel: /init Jan 16 23:58:48.923710 kernel: with environment: Jan 16 23:58:48.923717 kernel: HOME=/ Jan 16 23:58:48.923725 kernel: TERM=linux Jan 16 23:58:48.923735 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 23:58:48.923745 systemd[1]: Detected virtualization kvm. Jan 16 23:58:48.923754 systemd[1]: Detected architecture arm64. Jan 16 23:58:48.923764 systemd[1]: Running in initrd. Jan 16 23:58:48.923772 systemd[1]: No hostname configured, using default hostname. Jan 16 23:58:48.923780 systemd[1]: Hostname set to . 
Jan 16 23:58:48.923788 systemd[1]: Initializing machine ID from VM UUID. Jan 16 23:58:48.923796 systemd[1]: Queued start job for default target initrd.target. Jan 16 23:58:48.923804 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 23:58:48.923813 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 23:58:48.923822 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 23:58:48.923833 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 23:58:48.923841 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 23:58:48.923850 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 23:58:48.923890 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 16 23:58:48.923899 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 16 23:58:48.923907 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 23:58:48.924463 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 23:58:48.924478 systemd[1]: Reached target paths.target - Path Units. Jan 16 23:58:48.924487 systemd[1]: Reached target slices.target - Slice Units. Jan 16 23:58:48.924496 systemd[1]: Reached target swap.target - Swaps. Jan 16 23:58:48.924504 systemd[1]: Reached target timers.target - Timer Units. Jan 16 23:58:48.924513 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 23:58:48.924521 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 23:58:48.924531 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 16 23:58:48.924540 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 23:58:48.924548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 23:58:48.924558 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 23:58:48.924617 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 23:58:48.924631 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 23:58:48.924639 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 16 23:58:48.924648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 23:58:48.924656 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 23:58:48.924664 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 23:58:48.924673 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 23:58:48.924684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 23:58:48.924693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:58:48.924701 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 23:58:48.924710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 23:58:48.924718 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 23:58:48.924754 systemd-journald[236]: Collecting audit messages is disabled. Jan 16 23:58:48.924778 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 23:58:48.924787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 23:58:48.924798 kernel: Bridge firewalling registered Jan 16 23:58:48.924806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 16 23:58:48.924815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:58:48.924823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 23:58:48.924832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 23:58:48.924840 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 23:58:48.924849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 23:58:48.924887 systemd-journald[236]: Journal started Jan 16 23:58:48.924910 systemd-journald[236]: Runtime Journal (/run/log/journal/1a6c28b95bc5461ca92a05206e3f9bf6) is 8.0M, max 76.6M, 68.6M free. Jan 16 23:58:48.880815 systemd-modules-load[237]: Inserted module 'overlay' Jan 16 23:58:48.927238 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 23:58:48.895690 systemd-modules-load[237]: Inserted module 'br_netfilter' Jan 16 23:58:48.940972 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 23:58:48.942229 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 23:58:48.952028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 23:58:48.953751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 23:58:48.955729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 23:58:48.966150 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 23:58:48.975143 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 16 23:58:48.981337 dracut-cmdline[273]: dracut-dracut-053 Jan 16 23:58:48.987005 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 16 23:58:49.012571 systemd-resolved[274]: Positive Trust Anchors: Jan 16 23:58:49.012586 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 23:58:49.012617 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 23:58:49.022970 systemd-resolved[274]: Defaulting to hostname 'linux'. Jan 16 23:58:49.024985 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 23:58:49.026324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:58:49.060926 kernel: SCSI subsystem initialized Jan 16 23:58:49.064923 kernel: Loading iSCSI transport class v2.0-870. Jan 16 23:58:49.073103 kernel: iscsi: registered transport (tcp) Jan 16 23:58:49.085931 kernel: iscsi: registered transport (qla4xxx) Jan 16 23:58:49.086008 kernel: QLogic iSCSI HBA Driver Jan 16 23:58:49.136576 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 16 23:58:49.143154 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 23:58:49.165097 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 23:58:49.165192 kernel: device-mapper: uevent: version 1.0.3 Jan 16 23:58:49.165948 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 16 23:58:49.214951 kernel: raid6: neonx8 gen() 15651 MB/s Jan 16 23:58:49.231949 kernel: raid6: neonx4 gen() 15502 MB/s Jan 16 23:58:49.248920 kernel: raid6: neonx2 gen() 13139 MB/s Jan 16 23:58:49.265960 kernel: raid6: neonx1 gen() 10374 MB/s Jan 16 23:58:49.282911 kernel: raid6: int64x8 gen() 6880 MB/s Jan 16 23:58:49.299920 kernel: raid6: int64x4 gen() 7246 MB/s Jan 16 23:58:49.316908 kernel: raid6: int64x2 gen() 6070 MB/s Jan 16 23:58:49.333958 kernel: raid6: int64x1 gen() 5028 MB/s Jan 16 23:58:49.334040 kernel: raid6: using algorithm neonx8 gen() 15651 MB/s Jan 16 23:58:49.350928 kernel: raid6: .... xor() 11775 MB/s, rmw enabled Jan 16 23:58:49.351035 kernel: raid6: using neon recovery algorithm Jan 16 23:58:49.355908 kernel: xor: measuring software checksum speed Jan 16 23:58:49.355959 kernel: 8regs : 19778 MB/sec Jan 16 23:58:49.357082 kernel: 32regs : 16063 MB/sec Jan 16 23:58:49.357118 kernel: arm64_neon : 27087 MB/sec Jan 16 23:58:49.357139 kernel: xor: using function: arm64_neon (27087 MB/sec) Jan 16 23:58:49.406930 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 23:58:49.422526 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 23:58:49.432151 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 23:58:49.445985 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 16 23:58:49.450064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 16 23:58:49.461064 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 23:58:49.476608 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jan 16 23:58:49.516584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 23:58:49.524100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 23:58:49.583526 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 23:58:49.592119 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 23:58:49.612194 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 23:58:49.614541 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 23:58:49.616606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 23:58:49.617363 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 23:58:49.626540 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 23:58:49.643459 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 23:58:49.684907 kernel: scsi host0: Virtio SCSI HBA Jan 16 23:58:49.695092 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 16 23:58:49.695163 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 16 23:58:49.697383 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 23:58:49.698630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 23:58:49.701162 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 23:58:49.703101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:58:49.704736 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 16 23:58:49.706271 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:58:49.714674 kernel: ACPI: bus type USB registered Jan 16 23:58:49.715537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:58:49.718550 kernel: usbcore: registered new interface driver usbfs Jan 16 23:58:49.718579 kernel: usbcore: registered new interface driver hub Jan 16 23:58:49.718590 kernel: usbcore: registered new device driver usb Jan 16 23:58:49.729816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:58:49.736007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 23:58:49.745256 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 16 23:58:49.747309 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 16 23:58:49.747493 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 16 23:58:49.749878 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 16 23:58:49.755342 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 16 23:58:49.755540 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 16 23:58:49.756040 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 16 23:58:49.756212 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 16 23:58:49.757151 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 16 23:58:49.761944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 23:58:49.761983 kernel: GPT:17805311 != 80003071 Jan 16 23:58:49.761994 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 23:58:49.762004 kernel: GPT:17805311 != 80003071 Jan 16 23:58:49.762013 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 16 23:58:49.762919 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:58:49.763877 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 16 23:58:49.774893 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 16 23:58:49.775096 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 16 23:58:49.776195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 23:58:49.780903 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 16 23:58:49.781065 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 16 23:58:49.781894 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 16 23:58:49.784199 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 16 23:58:49.789447 kernel: hub 1-0:1.0: USB hub found Jan 16 23:58:49.789647 kernel: hub 1-0:1.0: 4 ports detected Jan 16 23:58:49.789740 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 16 23:58:49.790937 kernel: hub 2-0:1.0: USB hub found Jan 16 23:58:49.791113 kernel: hub 2-0:1.0: 4 ports detected Jan 16 23:58:49.815570 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (515) Jan 16 23:58:49.822886 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (521) Jan 16 23:58:49.825127 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 16 23:58:49.836499 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 16 23:58:49.838481 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 16 23:58:49.843816 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jan 16 23:58:49.854535 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 23:58:49.862422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 23:58:49.870342 disk-uuid[577]: Primary Header is updated. Jan 16 23:58:49.870342 disk-uuid[577]: Secondary Entries is updated. Jan 16 23:58:49.870342 disk-uuid[577]: Secondary Header is updated. Jan 16 23:58:49.878893 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:58:50.030946 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 16 23:58:50.167652 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 16 23:58:50.167714 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 16 23:58:50.169107 kernel: usbcore: registered new interface driver usbhid Jan 16 23:58:50.169140 kernel: usbhid: USB HID core driver Jan 16 23:58:50.272928 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 16 23:58:50.403907 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 16 23:58:50.458914 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 16 23:58:50.890938 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 23:58:50.892256 disk-uuid[578]: The operation has completed successfully. Jan 16 23:58:50.938361 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 23:58:50.939914 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 23:58:50.955079 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 16 23:58:50.958782 sh[593]: Success Jan 16 23:58:50.971008 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 16 23:58:51.023562 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 16 23:58:51.033101 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 16 23:58:51.034930 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 16 23:58:51.066816 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 16 23:58:51.066935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:58:51.066965 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 16 23:58:51.068310 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 23:58:51.069213 kernel: BTRFS info (device dm-0): using free space tree Jan 16 23:58:51.075907 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 16 23:58:51.077653 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 16 23:58:51.080587 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 23:58:51.086083 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 23:58:51.092262 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 16 23:58:51.105730 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:58:51.105808 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 23:58:51.107283 kernel: BTRFS info (device sda6): using free space tree Jan 16 23:58:51.111906 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 23:58:51.111954 kernel: BTRFS info (device sda6): auto enabling async discard Jan 16 23:58:51.121761 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 16 23:58:51.124308 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 16 23:58:51.130792 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 23:58:51.137957 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 23:58:51.237732 ignition[677]: Ignition 2.19.0 Jan 16 23:58:51.237740 ignition[677]: Stage: fetch-offline Jan 16 23:58:51.237775 ignition[677]: no configs at "/usr/lib/ignition/base.d" Jan 16 23:58:51.238916 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 23:58:51.237784 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 23:58:51.241165 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 23:58:51.237974 ignition[677]: parsed url from cmdline: "" Jan 16 23:58:51.237978 ignition[677]: no config URL provided Jan 16 23:58:51.237983 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 23:58:51.237991 ignition[677]: no config at "/usr/lib/ignition/user.ign" Jan 16 23:58:51.237995 ignition[677]: failed to fetch config: resource requires networking Jan 16 23:58:51.238169 ignition[677]: Ignition finished successfully Jan 16 23:58:51.250076 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 16 23:58:51.271678 systemd-networkd[780]: lo: Link UP Jan 16 23:58:51.272398 systemd-networkd[780]: lo: Gained carrier Jan 16 23:58:51.274846 systemd-networkd[780]: Enumeration completed Jan 16 23:58:51.275486 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 23:58:51.276533 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:58:51.276536 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:58:51.277073 systemd[1]: Reached target network.target - Network. Jan 16 23:58:51.278715 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:58:51.278718 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:58:51.279289 systemd-networkd[780]: eth0: Link UP Jan 16 23:58:51.279292 systemd-networkd[780]: eth0: Gained carrier Jan 16 23:58:51.279299 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:58:51.287612 systemd-networkd[780]: eth1: Link UP Jan 16 23:58:51.287623 systemd-networkd[780]: eth1: Gained carrier Jan 16 23:58:51.287634 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:58:51.288077 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 16 23:58:51.303627 ignition[782]: Ignition 2.19.0
Jan 16 23:58:51.303646 ignition[782]: Stage: fetch
Jan 16 23:58:51.303812 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:58:51.303832 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:58:51.303942 ignition[782]: parsed url from cmdline: ""
Jan 16 23:58:51.303946 ignition[782]: no config URL provided
Jan 16 23:58:51.303950 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 23:58:51.303958 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Jan 16 23:58:51.303977 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 16 23:58:51.304600 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 16 23:58:51.326964 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.4/32 acquired from 10.0.0.1
Jan 16 23:58:51.358945 systemd-networkd[780]: eth0: DHCPv4 address 91.99.121.187/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 16 23:58:51.504865 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 16 23:58:51.511020 ignition[782]: GET result: OK
Jan 16 23:58:51.511155 ignition[782]: parsing config with SHA512: cd28925d97138b393c40e2e0aebe21882568d320adbac8c0d577b451e493ce68c91d0cd4da3da89531ac1c592ce5f7fea1461887e8dfd84146ce130c4a3ca348
Jan 16 23:58:51.515634 unknown[782]: fetched base config from "system"
Jan 16 23:58:51.515644 unknown[782]: fetched base config from "system"
Jan 16 23:58:51.515952 ignition[782]: fetch: fetch complete
Jan 16 23:58:51.515649 unknown[782]: fetched user config from "hetzner"
Jan 16 23:58:51.515958 ignition[782]: fetch: fetch passed
Jan 16 23:58:51.516004 ignition[782]: Ignition finished successfully
Jan 16 23:58:51.519899 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 23:58:51.525043 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 23:58:51.539793 ignition[790]: Ignition 2.19.0
Jan 16 23:58:51.539809 ignition[790]: Stage: kargs
Jan 16 23:58:51.541052 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:58:51.541070 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:58:51.541897 ignition[790]: kargs: kargs passed
Jan 16 23:58:51.541948 ignition[790]: Ignition finished successfully
Jan 16 23:58:51.547935 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 23:58:51.554089 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 23:58:51.566794 ignition[797]: Ignition 2.19.0
Jan 16 23:58:51.566807 ignition[797]: Stage: disks
Jan 16 23:58:51.567012 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:58:51.567092 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:58:51.567991 ignition[797]: disks: disks passed
Jan 16 23:58:51.568045 ignition[797]: Ignition finished successfully
Jan 16 23:58:51.571181 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 23:58:51.571910 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 23:58:51.573026 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 23:58:51.574325 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 23:58:51.575502 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 23:58:51.576514 systemd[1]: Reached target basic.target - Basic System.
Jan 16 23:58:51.582059 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 23:58:51.601279 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 16 23:58:51.605551 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 23:58:51.611040 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 23:58:51.655884 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 16 23:58:51.656324 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 23:58:51.657280 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:58:51.669051 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:58:51.673952 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 23:58:51.675691 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 23:58:51.677659 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 23:58:51.677687 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:58:51.686625 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 23:58:51.691595 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814)
Jan 16 23:58:51.691643 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:58:51.691655 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:58:51.691665 kernel: BTRFS info (device sda6): using free space tree
Jan 16 23:58:51.693058 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 23:58:51.699447 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 16 23:58:51.699491 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 16 23:58:51.703537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:58:51.742566 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 23:58:51.749925 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Jan 16 23:58:51.752287 coreos-metadata[816]: Jan 16 23:58:51.752 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 16 23:58:51.755131 coreos-metadata[816]: Jan 16 23:58:51.753 INFO Fetch successful
Jan 16 23:58:51.755131 coreos-metadata[816]: Jan 16 23:58:51.754 INFO wrote hostname ci-4081-3-6-n-f88f5b170a to /sysroot/etc/hostname
Jan 16 23:58:51.757928 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 23:58:51.758568 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 23:58:51.762602 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 23:58:51.864452 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 23:58:51.870967 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 23:58:51.874124 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 23:58:51.884982 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:58:51.910376 ignition[932]: INFO : Ignition 2.19.0
Jan 16 23:58:51.910376 ignition[932]: INFO : Stage: mount
Jan 16 23:58:51.913264 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:58:51.913264 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:58:51.913264 ignition[932]: INFO : mount: mount passed
Jan 16 23:58:51.913264 ignition[932]: INFO : Ignition finished successfully
Jan 16 23:58:51.916375 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 23:58:51.917440 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 23:58:51.934058 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 23:58:52.065652 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 23:58:52.071081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:58:52.083232 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Jan 16 23:58:52.083300 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:58:52.083326 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:58:52.084144 kernel: BTRFS info (device sda6): using free space tree
Jan 16 23:58:52.087191 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 16 23:58:52.087235 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 16 23:58:52.091426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:58:52.112511 ignition[961]: INFO : Ignition 2.19.0
Jan 16 23:58:52.112511 ignition[961]: INFO : Stage: files
Jan 16 23:58:52.113700 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:58:52.113700 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:58:52.113700 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 23:58:52.117029 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 23:58:52.117029 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 23:58:52.119257 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 23:58:52.119257 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 23:58:52.119257 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 23:58:52.117674 unknown[961]: wrote ssh authorized keys file for user: core
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:58:52.122633 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 16 23:58:52.427910 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 16 23:58:52.842926 systemd-networkd[780]: eth0: Gained IPv6LL
Jan 16 23:58:52.904113 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:58:52.907508 ignition[961]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 16 23:58:52.909083 ignition[961]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 16 23:58:52.909083 ignition[961]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 16 23:58:52.909083 ignition[961]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 16 23:58:52.914602 ignition[961]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:58:52.914602 ignition[961]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:58:52.914602 ignition[961]: INFO : files: files passed
Jan 16 23:58:52.914602 ignition[961]: INFO : Ignition finished successfully
Jan 16 23:58:52.912104 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 23:58:52.921483 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 23:58:52.924025 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 23:58:52.926194 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 23:58:52.926287 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 23:58:52.943783 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:58:52.943783 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:58:52.946115 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:58:52.947823 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:58:52.949428 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 23:58:52.964254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 23:58:53.007419 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 23:58:53.007558 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 23:58:53.010415 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 23:58:53.011604 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 23:58:53.013709 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 23:58:53.020144 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 23:58:53.039386 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:58:53.053148 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 23:58:53.067409 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:58:53.068277 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:58:53.069608 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 23:58:53.070887 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 23:58:53.071015 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:58:53.072652 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 23:58:53.073416 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 23:58:53.074677 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 23:58:53.075915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:58:53.077213 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 23:58:53.078600 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 23:58:53.079870 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:58:53.081256 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 23:58:53.082585 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 23:58:53.083821 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 23:58:53.084802 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 23:58:53.084942 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:58:53.086395 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:58:53.087150 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:58:53.088414 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 23:58:53.088491 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:58:53.089625 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 23:58:53.089743 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:58:53.091483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 23:58:53.091600 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:58:53.092951 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 23:58:53.093045 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 23:58:53.094412 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 16 23:58:53.094507 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 23:58:53.102191 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 23:58:53.106826 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 23:58:53.107341 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 23:58:53.107459 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:58:53.111115 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 23:58:53.111220 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:58:53.118363 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 23:58:53.118454 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 23:58:53.126582 ignition[1014]: INFO : Ignition 2.19.0
Jan 16 23:58:53.126582 ignition[1014]: INFO : Stage: umount
Jan 16 23:58:53.130084 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:58:53.130084 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 16 23:58:53.130084 ignition[1014]: INFO : umount: umount passed
Jan 16 23:58:53.130084 ignition[1014]: INFO : Ignition finished successfully
Jan 16 23:58:53.129946 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 23:58:53.130838 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 23:58:53.136722 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 23:58:53.137421 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 23:58:53.138899 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 23:58:53.140338 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 23:58:53.140425 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 23:58:53.142033 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 23:58:53.142075 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 23:58:53.143065 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 23:58:53.143106 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 23:58:53.144045 systemd[1]: Stopped target network.target - Network.
Jan 16 23:58:53.144933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 23:58:53.144982 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:58:53.146030 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 23:58:53.146862 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 23:58:53.149139 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:58:53.149808 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 23:58:53.150743 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 23:58:53.151877 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 23:58:53.151919 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:58:53.153390 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 23:58:53.153425 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:58:53.154411 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 23:58:53.154458 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 23:58:53.155490 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 23:58:53.155531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 23:58:53.156474 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 23:58:53.156513 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 23:58:53.157726 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 23:58:53.158614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 23:58:53.164180 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 23:58:53.164587 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 23:58:53.166493 systemd-networkd[780]: eth1: DHCPv6 lease lost
Jan 16 23:58:53.167324 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 23:58:53.167383 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:58:53.169379 systemd-networkd[780]: eth0: DHCPv6 lease lost
Jan 16 23:58:53.171007 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 23:58:53.171369 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 23:58:53.173071 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 23:58:53.173109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:58:53.178013 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 23:58:53.179746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 23:58:53.179823 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:58:53.181634 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 23:58:53.181677 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:58:53.184840 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 23:58:53.184951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:58:53.186124 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:58:53.198050 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 23:58:53.198163 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 23:58:53.213070 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 23:58:53.213372 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:58:53.216967 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 23:58:53.217029 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:58:53.218348 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 23:58:53.218390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:58:53.219843 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 23:58:53.219920 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:58:53.221976 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 23:58:53.222020 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:58:53.223570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:58:53.223611 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:58:53.230094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 23:58:53.230678 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 23:58:53.230729 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:58:53.232994 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 16 23:58:53.233036 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:58:53.234507 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 23:58:53.234548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:58:53.235307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:58:53.235349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:58:53.241832 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 23:58:53.241947 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 23:58:53.243546 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 23:58:53.251988 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 23:58:53.258275 systemd[1]: Switching root.
Jan 16 23:58:53.294947 systemd-journald[236]: Journal stopped
Jan 16 23:58:54.230256 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Jan 16 23:58:54.230336 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 23:58:54.230355 kernel: SELinux: policy capability open_perms=1
Jan 16 23:58:54.230367 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 23:58:54.230376 kernel: SELinux: policy capability always_check_network=0
Jan 16 23:58:54.230386 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 23:58:54.230395 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 23:58:54.230405 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 23:58:54.230419 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 23:58:54.230431 kernel: audit: type=1403 audit(1768607933.443:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 23:58:54.230442 systemd[1]: Successfully loaded SELinux policy in 38.703ms.
Jan 16 23:58:54.230462 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.859ms.
Jan 16 23:58:54.230474 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:58:54.230485 systemd[1]: Detected virtualization kvm.
Jan 16 23:58:54.230495 systemd[1]: Detected architecture arm64.
Jan 16 23:58:54.230506 systemd[1]: Detected first boot.
Jan 16 23:58:54.230516 systemd[1]: Hostname set to .
Jan 16 23:58:54.230528 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:58:54.230539 zram_generator::config[1057]: No configuration found.
Jan 16 23:58:54.230550 systemd[1]: Populated /etc with preset unit settings.
Jan 16 23:58:54.230560 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 23:58:54.230571 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 23:58:54.230581 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 23:58:54.230593 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 23:58:54.230603 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 23:58:54.230623 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 23:58:54.230636 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 23:58:54.230651 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 23:58:54.230662 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 23:58:54.230672 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 23:58:54.230687 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 23:58:54.230698 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:58:54.230709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:58:54.230719 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 23:58:54.230732 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 23:58:54.230747 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 23:58:54.230758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:58:54.230777 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 16 23:58:54.230791 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:58:54.230802 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 23:58:54.230812 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 23:58:54.230825 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:58:54.230836 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 23:58:54.230846 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:58:54.231898 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:58:54.231917 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:58:54.231930 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:58:54.231941 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 23:58:54.231952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 23:58:54.231967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:58:54.231978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:58:54.231989 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:58:54.231999 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 23:58:54.232009 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 23:58:54.232019 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 23:58:54.232029 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 23:58:54.232040 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 23:58:54.232051 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 23:58:54.232062 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 23:58:54.232073 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 23:58:54.232089 systemd[1]: Reached target machines.target - Containers.
Jan 16 23:58:54.232101 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 23:58:54.232112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:58:54.232123 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:58:54.232133 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 23:58:54.232144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:58:54.232154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 23:58:54.232167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:58:54.232178 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 23:58:54.232189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:58:54.232200 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 23:58:54.232210 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 23:58:54.232220 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 23:58:54.232231 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 23:58:54.232241 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 23:58:54.232252 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:58:54.232263 kernel: fuse: init (API version 7.39) Jan 16 23:58:54.232273 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 23:58:54.232284 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 23:58:54.232295 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 23:58:54.232305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 23:58:54.232318 systemd[1]: verity-setup.service: Deactivated successfully. Jan 16 23:58:54.232332 systemd[1]: Stopped verity-setup.service. Jan 16 23:58:54.232343 kernel: ACPI: bus type drm_connector registered Jan 16 23:58:54.232354 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 23:58:54.232365 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 23:58:54.232377 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 23:58:54.232388 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 23:58:54.232401 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 23:58:54.232412 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 23:58:54.232423 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 23:58:54.232434 kernel: loop: module loaded Jan 16 23:58:54.232444 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 23:58:54.232454 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 23:58:54.232465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:58:54.232478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:58:54.232489 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 23:58:54.232499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 16 23:58:54.232512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:58:54.232522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:58:54.232537 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 23:58:54.232553 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 23:58:54.232563 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:58:54.232577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:58:54.232593 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 23:58:54.232606 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 23:58:54.232617 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 23:58:54.232631 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 23:58:54.232669 systemd-journald[1124]: Collecting audit messages is disabled. Jan 16 23:58:54.232707 systemd-journald[1124]: Journal started Jan 16 23:58:54.232735 systemd-journald[1124]: Runtime Journal (/run/log/journal/1a6c28b95bc5461ca92a05206e3f9bf6) is 8.0M, max 76.6M, 68.6M free. Jan 16 23:58:54.235910 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 23:58:53.914569 systemd[1]: Queued start job for default target multi-user.target. Jan 16 23:58:53.939102 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 16 23:58:53.939642 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 23:58:54.248868 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 23:58:54.248941 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 16 23:58:54.250390 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 23:58:54.256887 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 23:58:54.266885 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 23:58:54.277684 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 23:58:54.278974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:58:54.283468 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 23:58:54.288889 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:58:54.300874 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 23:58:54.300942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:58:54.300960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 23:58:54.306954 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 23:58:54.311523 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 23:58:54.316890 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 23:58:54.320428 kernel: loop0: detected capacity change from 0 to 207008 Jan 16 23:58:54.323803 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 23:58:54.326388 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 23:58:54.331097 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 16 23:58:54.332548 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 23:58:54.336184 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 23:58:54.340054 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 23:58:54.355589 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 23:58:54.359514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 23:58:54.367240 kernel: loop1: detected capacity change from 0 to 8 Jan 16 23:58:54.374756 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 16 23:58:54.375079 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 16 23:58:54.383922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 23:58:54.389904 kernel: loop2: detected capacity change from 0 to 114328 Jan 16 23:58:54.393413 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 23:58:54.400122 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 23:58:54.404309 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 23:58:54.407790 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 23:58:54.411168 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 23:58:54.421708 systemd-journald[1124]: Time spent on flushing to /var/log/journal/1a6c28b95bc5461ca92a05206e3f9bf6 is 64.407ms for 1121 entries. Jan 16 23:58:54.421708 systemd-journald[1124]: System Journal (/var/log/journal/1a6c28b95bc5461ca92a05206e3f9bf6) is 8.0M, max 584.8M, 576.8M free. Jan 16 23:58:54.503001 systemd-journald[1124]: Received client request to flush runtime journal. 
Jan 16 23:58:54.503058 kernel: loop3: detected capacity change from 0 to 114432 Jan 16 23:58:54.503074 kernel: loop4: detected capacity change from 0 to 207008 Jan 16 23:58:54.503086 kernel: loop5: detected capacity change from 0 to 8 Jan 16 23:58:54.503098 kernel: loop6: detected capacity change from 0 to 114328 Jan 16 23:58:54.462290 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 16 23:58:54.470112 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 23:58:54.472408 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 23:58:54.504988 kernel: loop7: detected capacity change from 0 to 114432 Jan 16 23:58:54.505238 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 23:58:54.516300 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 23:58:54.527080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 23:58:54.528909 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 16 23:58:54.530692 (sd-merge)[1192]: Merged extensions into '/usr'. Jan 16 23:58:54.541440 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 23:58:54.541456 systemd[1]: Reloading... Jan 16 23:58:54.563691 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 16 23:58:54.564382 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 16 23:58:54.647882 zram_generator::config[1225]: No configuration found. Jan 16 23:58:54.813511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 16 23:58:54.853795 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 23:58:54.878281 systemd[1]: Reloading finished in 336 ms. Jan 16 23:58:54.904031 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 23:58:54.905341 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 23:58:54.906659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 23:58:54.918499 systemd[1]: Starting ensure-sysext.service... Jan 16 23:58:54.921788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 23:58:54.938931 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Jan 16 23:58:54.938964 systemd[1]: Reloading... Jan 16 23:58:54.980264 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 23:58:54.980490 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 23:58:54.982234 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 23:58:54.982443 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 16 23:58:54.982508 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 16 23:58:54.988463 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 23:58:54.988477 systemd-tmpfiles[1264]: Skipping /boot Jan 16 23:58:54.997505 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 23:58:54.997521 systemd-tmpfiles[1264]: Skipping /boot Jan 16 23:58:55.017876 zram_generator::config[1288]: No configuration found. 
Jan 16 23:58:55.143240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:58:55.189329 systemd[1]: Reloading finished in 249 ms. Jan 16 23:58:55.209734 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 23:58:55.217936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 23:58:55.225076 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 23:58:55.230041 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 23:58:55.234199 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 23:58:55.239112 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 23:58:55.244076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 23:58:55.246291 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 23:58:55.256271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:58:55.265226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:58:55.268295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:58:55.273354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:58:55.275050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:58:55.276961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 16 23:58:55.277117 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:58:55.280138 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 23:58:55.285553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:58:55.293150 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 23:58:55.295026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:58:55.296887 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 23:58:55.298265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:58:55.298394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:58:55.312108 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 23:58:55.313144 systemd[1]: Finished ensure-sysext.service. Jan 16 23:58:55.318033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:58:55.319910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:58:55.329293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:58:55.339180 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 23:58:55.341874 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 23:58:55.343346 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 23:58:55.343940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 16 23:58:55.345837 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 23:58:55.352304 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 23:58:55.357745 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:58:55.358547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 23:58:55.361396 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:58:55.365992 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 23:58:55.370133 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 23:58:55.378777 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Jan 16 23:58:55.390655 augenrules[1371]: No rules Jan 16 23:58:55.394469 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 23:58:55.408404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 23:58:55.422124 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 23:58:55.487009 systemd-networkd[1382]: lo: Link UP Jan 16 23:58:55.487018 systemd-networkd[1382]: lo: Gained carrier Jan 16 23:58:55.487837 systemd-networkd[1382]: Enumeration completed Jan 16 23:58:55.488145 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 23:58:55.493028 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 23:58:55.493828 systemd-resolved[1333]: Positive Trust Anchors: Jan 16 23:58:55.493840 systemd-resolved[1333]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 23:58:55.494229 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 23:58:55.495604 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 23:58:55.497107 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 23:58:55.505992 systemd-resolved[1333]: Using system hostname 'ci-4081-3-6-n-f88f5b170a'. Jan 16 23:58:55.509731 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 23:58:55.510779 systemd[1]: Reached target network.target - Network. Jan 16 23:58:55.511636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:58:55.535951 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 16 23:58:55.562939 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 23:58:55.596762 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:58:55.596774 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:58:55.598713 systemd-networkd[1382]: eth0: Link UP Jan 16 23:58:55.598721 systemd-networkd[1382]: eth0: Gained carrier Jan 16 23:58:55.598738 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 16 23:58:55.637058 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 16 23:58:55.637182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 23:58:55.642061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 23:58:55.644026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 23:58:55.646700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 23:58:55.647520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 23:58:55.647555 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 23:58:55.660979 systemd-networkd[1382]: eth0: DHCPv4 address 91.99.121.187/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 23:58:55.662141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 23:58:55.662298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 23:58:55.663659 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. Jan 16 23:58:55.667711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 23:58:55.667925 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 23:58:55.668842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 23:58:55.671207 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 23:58:55.671366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 16 23:58:55.672267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 23:58:55.686894 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1395) Jan 16 23:58:55.707375 systemd-networkd[1382]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:58:55.707389 systemd-networkd[1382]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 23:58:55.709292 systemd-networkd[1382]: eth1: Link UP Jan 16 23:58:55.709300 systemd-networkd[1382]: eth1: Gained carrier Jan 16 23:58:55.709319 systemd-networkd[1382]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 23:58:55.747247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:58:55.748936 systemd-networkd[1382]: eth1: DHCPv4 address 10.0.0.4/32 acquired from 10.0.0.1 Jan 16 23:58:55.758901 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 16 23:58:55.758970 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 23:58:55.758983 kernel: [drm] features: -context_init Jan 16 23:58:55.759899 kernel: [drm] number of scanouts: 1 Jan 16 23:58:55.759949 kernel: [drm] number of cap sets: 0 Jan 16 23:58:55.761887 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 16 23:58:55.772595 kernel: Console: switching to colour frame buffer device 160x50 Jan 16 23:58:55.764305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 23:58:55.776161 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 23:58:55.785066 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 16 23:58:55.793528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:58:55.794678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:58:55.802028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 23:58:55.802951 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 23:58:55.831901 systemd-timesyncd[1356]: Contacted time server 144.76.76.107:123 (0.flatcar.pool.ntp.org). Jan 16 23:58:55.832001 systemd-timesyncd[1356]: Initial clock synchronization to Fri 2026-01-16 23:58:55.926331 UTC. Jan 16 23:58:55.862470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:58:55.899141 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 23:58:55.911335 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 23:58:55.923906 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:58:55.953337 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 23:58:55.955588 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 23:58:55.956681 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 23:58:55.957718 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 23:58:55.958612 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 23:58:55.959808 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 23:58:55.960641 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 23:58:55.961476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 16 23:58:55.962250 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 23:58:55.962286 systemd[1]: Reached target paths.target - Path Units. Jan 16 23:58:55.962800 systemd[1]: Reached target timers.target - Timer Units. Jan 16 23:58:55.964949 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 23:58:55.967587 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 23:58:55.973075 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 23:58:55.975155 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 23:58:55.976435 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 23:58:55.977275 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 23:58:55.977955 systemd[1]: Reached target basic.target - Basic System. Jan 16 23:58:55.978530 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:58:55.978563 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 23:58:55.981024 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 23:58:55.986046 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 23:58:55.991044 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 23:58:55.995083 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 23:58:56.001186 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 23:58:56.007069 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 16 23:58:56.007639 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 23:58:56.011695 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 23:58:56.017120 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 16 23:58:56.022108 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 23:58:56.023055 jq[1453]: false Jan 16 23:58:56.028904 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 23:58:56.036499 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 23:58:56.040100 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 23:58:56.040749 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 23:58:56.044051 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 23:58:56.052401 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 23:58:56.055343 dbus-daemon[1452]: [system] SELinux support is enabled Jan 16 23:58:56.056102 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 23:58:56.056944 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 23:58:56.063902 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 23:58:56.064944 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 16 23:58:56.068370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 23:58:56.068409 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 23:58:56.072208 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 23:58:56.072236 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 23:58:56.084792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 23:58:56.085024 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 23:58:56.092877 jq[1465]: true Jan 16 23:58:56.092667 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 23:58:56.093280 coreos-metadata[1451]: Jan 16 23:58:56.092 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 16 23:58:56.101858 coreos-metadata[1451]: Jan 16 23:58:56.101 INFO Fetch successful Jan 16 23:58:56.101858 coreos-metadata[1451]: Jan 16 23:58:56.101 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 16 23:58:56.104067 coreos-metadata[1451]: Jan 16 23:58:56.104 INFO Fetch successful Jan 16 23:58:56.116959 extend-filesystems[1456]: Found loop4 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found loop5 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found loop6 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found loop7 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found sda Jan 16 23:58:56.116959 extend-filesystems[1456]: Found sda1 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found sda2 Jan 16 
23:58:56.116959 extend-filesystems[1456]: Found sda3 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found usr Jan 16 23:58:56.116959 extend-filesystems[1456]: Found sda4 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found sda6 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found sda7 Jan 16 23:58:56.116959 extend-filesystems[1456]: Found sda9 Jan 16 23:58:56.116959 extend-filesystems[1456]: Checking size of /dev/sda9 Jan 16 23:58:56.125118 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 23:58:56.125427 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 23:58:56.164925 jq[1481]: true Jan 16 23:58:56.152892 systemd-logind[1462]: New seat seat0. Jan 16 23:58:56.167321 extend-filesystems[1456]: Resized partition /dev/sda9 Jan 16 23:58:56.171976 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 16 23:58:56.158156 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 23:58:56.172186 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) Jan 16 23:58:56.158171 systemd-logind[1462]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 16 23:58:56.178855 update_engine[1463]: I20260116 23:58:56.176590 1463 main.cc:92] Flatcar Update Engine starting Jan 16 23:58:56.158359 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 23:58:56.189849 update_engine[1463]: I20260116 23:58:56.188602 1463 update_check_scheduler.cc:74] Next update check in 5m36s Jan 16 23:58:56.192347 systemd[1]: Started update-engine.service - Update Engine. Jan 16 23:58:56.196302 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 23:58:56.228663 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 23:58:56.231571 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 16 23:58:56.270169 bash[1520]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 23:58:56.274927 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 16 23:58:56.284217 systemd[1]: Starting sshkeys.service...
Jan 16 23:58:56.327753 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 16 23:58:56.339199 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1384)
Jan 16 23:58:56.340915 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 16 23:58:56.342887 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 16 23:58:56.373235 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 16 23:58:56.373235 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 16 23:58:56.373235 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 16 23:58:56.383068 extend-filesystems[1456]: Resized filesystem in /dev/sda9
Jan 16 23:58:56.383068 extend-filesystems[1456]: Found sr0
Jan 16 23:58:56.374434 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 16 23:58:56.393973 containerd[1472]: time="2026-01-16T23:58:56.386575088Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 16 23:58:56.374621 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 16 23:58:56.435189 coreos-metadata[1524]: Jan 16 23:58:56.434 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 16 23:58:56.436631 coreos-metadata[1524]: Jan 16 23:58:56.436 INFO Fetch successful
Jan 16 23:58:56.439082 unknown[1524]: wrote ssh authorized keys file for user: core
Jan 16 23:58:56.457926 containerd[1472]: time="2026-01-16T23:58:56.456506075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 16 23:58:56.459649 containerd[1472]: time="2026-01-16T23:58:56.459607702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 16 23:58:56.459741 containerd[1472]: time="2026-01-16T23:58:56.459726815Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 16 23:58:56.462324 containerd[1472]: time="2026-01-16T23:58:56.459784247Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 16 23:58:56.462324 containerd[1472]: time="2026-01-16T23:58:56.462253399Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 16 23:58:56.462324 containerd[1472]: time="2026-01-16T23:58:56.462282782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 16 23:58:56.464073 containerd[1472]: time="2026-01-16T23:58:56.464017264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 23:58:56.464073 containerd[1472]: time="2026-01-16T23:58:56.464042074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 16 23:58:56.464638 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys"
Jan 16 23:58:56.465707 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.471552372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.471580784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.471597864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.471608023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.471693826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.471940713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.472051529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.472066180Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.472148624Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 16 23:58:56.472331 containerd[1472]: time="2026-01-16T23:58:56.472189178Z" level=info msg="metadata content store policy set" policy=shared
Jan 16 23:58:56.474263 systemd[1]: Finished sshkeys.service.
Jan 16 23:58:56.481076 containerd[1472]: time="2026-01-16T23:58:56.481047115Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 16 23:58:56.481203 containerd[1472]: time="2026-01-16T23:58:56.481188528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.481284045Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.481315857Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.481335082Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.481545057Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.481875157Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.481976785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.481992853Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.482007302Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.482025718Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.482038952Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.482051175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.482068660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.482083028Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.482896 containerd[1472]: time="2026-01-16T23:58:56.482094805Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482109092Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482121356Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482140419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482153937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482166645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482179152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482193236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482205459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482217197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482229946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482242290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482255727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482267829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483217 containerd[1472]: time="2026-01-16T23:58:56.482280092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482292639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482309597Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482330198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482342421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482353713Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482512449Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482533212Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482544180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482555917Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482565509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482579108Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482588903Z" level=info msg="NRI interface is disabled by configuration."
Jan 16 23:58:56.483467 containerd[1472]: time="2026-01-16T23:58:56.482599507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 16 23:58:56.486082 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 16 23:58:56.487082 containerd[1472]: time="2026-01-16T23:58:56.486033906Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 16 23:58:56.487082 containerd[1472]: time="2026-01-16T23:58:56.486117483Z" level=info msg="Connect containerd service"
Jan 16 23:58:56.487082 containerd[1472]: time="2026-01-16T23:58:56.486168196Z" level=info msg="using legacy CRI server"
Jan 16 23:58:56.487082 containerd[1472]: time="2026-01-16T23:58:56.486177181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 16 23:58:56.487082 containerd[1472]: time="2026-01-16T23:58:56.486280023Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 16 23:58:56.488907 containerd[1472]: time="2026-01-16T23:58:56.487667608Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 16 23:58:56.488907 containerd[1472]: time="2026-01-16T23:58:56.487902718Z" level=info msg="Start subscribing containerd event"
Jan 16 23:58:56.488907 containerd[1472]: time="2026-01-16T23:58:56.487972574Z" level=info msg="Start recovering state"
Jan 16 23:58:56.488907 containerd[1472]: time="2026-01-16T23:58:56.488056921Z" level=info msg="Start event monitor"
Jan 16 23:58:56.488907 containerd[1472]: time="2026-01-16T23:58:56.488069103Z" level=info msg="Start snapshots syncer"
Jan 16 23:58:56.488907 containerd[1472]: time="2026-01-16T23:58:56.488079748Z" level=info msg="Start cni network conf syncer for default"
Jan 16 23:58:56.488907 containerd[1472]: time="2026-01-16T23:58:56.488088773Z" level=info msg="Start streaming server"
Jan 16 23:58:56.490178 containerd[1472]: time="2026-01-16T23:58:56.490145543Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 16 23:58:56.490223 containerd[1472]: time="2026-01-16T23:58:56.490208965Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 16 23:58:56.491846 containerd[1472]: time="2026-01-16T23:58:56.490992284Z" level=info msg="containerd successfully booted in 0.108250s"
Jan 16 23:58:56.490337 systemd[1]: Started containerd.service - containerd container runtime.
Jan 16 23:58:56.581212 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 16 23:58:56.607958 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 16 23:58:56.615351 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 16 23:58:56.624242 systemd[1]: issuegen.service: Deactivated successfully.
Jan 16 23:58:56.624536 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 16 23:58:56.632364 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 16 23:58:56.645959 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 16 23:58:56.660847 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 16 23:58:56.668464 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 16 23:58:56.671263 systemd[1]: Reached target getty.target - Login Prompts.
Jan 16 23:58:57.386949 systemd-networkd[1382]: eth0: Gained IPv6LL
Jan 16 23:58:57.392952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 16 23:58:57.394697 systemd[1]: Reached target network-online.target - Network is Online.
Jan 16 23:58:57.403177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:58:57.406342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 16 23:58:57.440160 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 16 23:58:57.642136 systemd-networkd[1382]: eth1: Gained IPv6LL
Jan 16 23:58:58.199793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:58:58.202306 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 16 23:58:58.205435 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:58:58.208259 systemd[1]: Startup finished in 772ms (kernel) + 4.749s (initrd) + 4.803s (userspace) = 10.325s.
Jan 16 23:58:58.704082 kubelet[1574]: E0116 23:58:58.704003 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:58:58.710075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:58:58.710551 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:59:08.882656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 16 23:59:08.889121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:09.006018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:09.014231 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:59:09.070945 kubelet[1593]: E0116 23:59:09.070881 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:59:09.075942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:59:09.076085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:59:19.132743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 16 23:59:19.139152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:19.256047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:19.260444 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:59:19.301176 kubelet[1608]: E0116 23:59:19.301082 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:59:19.304753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:59:19.305069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:59:26.300137 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 16 23:59:26.305283 systemd[1]: Started sshd@0-91.99.121.187:22-4.153.228.146:51344.service - OpenSSH per-connection server daemon (4.153.228.146:51344).
Jan 16 23:59:26.958666 sshd[1616]: Accepted publickey for core from 4.153.228.146 port 51344 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:26.961646 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:26.972591 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 16 23:59:26.979397 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 16 23:59:26.984847 systemd-logind[1462]: New session 1 of user core.
Jan 16 23:59:26.990318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 16 23:59:27.000364 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 16 23:59:27.004569 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 16 23:59:27.117569 systemd[1620]: Queued start job for default target default.target.
Jan 16 23:59:27.130790 systemd[1620]: Created slice app.slice - User Application Slice.
Jan 16 23:59:27.130845 systemd[1620]: Reached target paths.target - Paths.
Jan 16 23:59:27.130957 systemd[1620]: Reached target timers.target - Timers.
Jan 16 23:59:27.133047 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 16 23:59:27.148417 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 16 23:59:27.148623 systemd[1620]: Reached target sockets.target - Sockets.
Jan 16 23:59:27.148660 systemd[1620]: Reached target basic.target - Basic System.
Jan 16 23:59:27.148727 systemd[1620]: Reached target default.target - Main User Target.
Jan 16 23:59:27.148778 systemd[1620]: Startup finished in 136ms.
Jan 16 23:59:27.148832 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 16 23:59:27.156191 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 16 23:59:27.625267 systemd[1]: Started sshd@1-91.99.121.187:22-4.153.228.146:51354.service - OpenSSH per-connection server daemon (4.153.228.146:51354).
Jan 16 23:59:28.242432 sshd[1631]: Accepted publickey for core from 4.153.228.146 port 51354 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:28.244845 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:28.249688 systemd-logind[1462]: New session 2 of user core.
Jan 16 23:59:28.264224 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 16 23:59:28.686266 sshd[1631]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:28.691841 systemd[1]: sshd@1-91.99.121.187:22-4.153.228.146:51354.service: Deactivated successfully.
Jan 16 23:59:28.693653 systemd[1]: session-2.scope: Deactivated successfully.
Jan 16 23:59:28.695700 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit.
Jan 16 23:59:28.697610 systemd-logind[1462]: Removed session 2.
Jan 16 23:59:28.804345 systemd[1]: Started sshd@2-91.99.121.187:22-4.153.228.146:51366.service - OpenSSH per-connection server daemon (4.153.228.146:51366).
Jan 16 23:59:29.332898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 16 23:59:29.342224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:29.446232 sshd[1638]: Accepted publickey for core from 4.153.228.146 port 51366 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:29.448700 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:29.454902 systemd-logind[1462]: New session 3 of user core.
Jan 16 23:59:29.458151 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 16 23:59:29.480101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 23:59:29.481029 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 16 23:59:29.520156 kubelet[1649]: E0116 23:59:29.520098 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 16 23:59:29.523586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 23:59:29.523795 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 16 23:59:29.899378 sshd[1638]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:29.904574 systemd[1]: sshd@2-91.99.121.187:22-4.153.228.146:51366.service: Deactivated successfully.
Jan 16 23:59:29.906219 systemd[1]: session-3.scope: Deactivated successfully.
Jan 16 23:59:29.907083 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit.
Jan 16 23:59:29.908527 systemd-logind[1462]: Removed session 3.
Jan 16 23:59:30.016334 systemd[1]: Started sshd@3-91.99.121.187:22-4.153.228.146:51378.service - OpenSSH per-connection server daemon (4.153.228.146:51378).
Jan 16 23:59:30.642602 sshd[1660]: Accepted publickey for core from 4.153.228.146 port 51378 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:30.644845 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:30.649393 systemd-logind[1462]: New session 4 of user core.
Jan 16 23:59:30.657177 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 16 23:59:31.088542 sshd[1660]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:31.092916 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit.
Jan 16 23:59:31.093601 systemd[1]: sshd@3-91.99.121.187:22-4.153.228.146:51378.service: Deactivated successfully.
Jan 16 23:59:31.095595 systemd[1]: session-4.scope: Deactivated successfully.
Jan 16 23:59:31.096789 systemd-logind[1462]: Removed session 4.
Jan 16 23:59:31.194320 systemd[1]: Started sshd@4-91.99.121.187:22-4.153.228.146:51394.service - OpenSSH per-connection server daemon (4.153.228.146:51394).
Jan 16 23:59:31.792815 sshd[1667]: Accepted publickey for core from 4.153.228.146 port 51394 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:31.794865 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:31.799979 systemd-logind[1462]: New session 5 of user core.
Jan 16 23:59:31.806140 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 16 23:59:32.133754 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 16 23:59:32.134075 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 23:59:32.150139 sudo[1670]: pam_unix(sudo:session): session closed for user root
Jan 16 23:59:32.246720 sshd[1667]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:32.250840 systemd[1]: sshd@4-91.99.121.187:22-4.153.228.146:51394.service: Deactivated successfully.
Jan 16 23:59:32.252554 systemd[1]: session-5.scope: Deactivated successfully.
Jan 16 23:59:32.254148 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit.
Jan 16 23:59:32.255764 systemd-logind[1462]: Removed session 5.
Jan 16 23:59:32.359322 systemd[1]: Started sshd@5-91.99.121.187:22-4.153.228.146:51404.service - OpenSSH per-connection server daemon (4.153.228.146:51404).
Jan 16 23:59:32.955608 sshd[1675]: Accepted publickey for core from 4.153.228.146 port 51404 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 16 23:59:32.958040 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 23:59:32.963071 systemd-logind[1462]: New session 6 of user core.
Jan 16 23:59:32.969176 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 16 23:59:33.287383 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 16 23:59:33.288484 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 23:59:33.292462 sudo[1679]: pam_unix(sudo:session): session closed for user root
Jan 16 23:59:33.299325 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 16 23:59:33.299629 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 16 23:59:33.315603 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 16 23:59:33.318084 auditctl[1682]: No rules
Jan 16 23:59:33.318422 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 16 23:59:33.318584 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 16 23:59:33.322716 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 23:59:33.355596 augenrules[1700]: No rules
Jan 16 23:59:33.357936 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 16 23:59:33.359421 sudo[1678]: pam_unix(sudo:session): session closed for user root
Jan 16 23:59:33.455320 sshd[1675]: pam_unix(sshd:session): session closed for user core
Jan 16 23:59:33.459674 systemd[1]: sshd@5-91.99.121.187:22-4.153.228.146:51404.service: Deactivated successfully.
Jan 16 23:59:33.459695 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit.
Jan 16 23:59:33.461850 systemd[1]: session-6.scope: Deactivated successfully.
Jan 16 23:59:33.464937 systemd-logind[1462]: Removed session 6.
Jan 16 23:59:33.574016 systemd[1]: Started sshd@6-91.99.121.187:22-4.153.228.146:51418.service - OpenSSH per-connection server daemon (4.153.228.146:51418).
Jan 16 23:59:34.228931 sshd[1708]: Accepted publickey for core from 4.153.228.146 port 51418 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 16 23:59:34.230815 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:34.236444 systemd-logind[1462]: New session 7 of user core. Jan 16 23:59:34.244242 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 23:59:34.583622 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 23:59:34.583969 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:59:35.199133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:35.206654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:35.244355 systemd[1]: Reloading requested from client PID 1744 ('systemctl') (unit session-7.scope)... Jan 16 23:59:35.244370 systemd[1]: Reloading... Jan 16 23:59:35.358888 zram_generator::config[1789]: No configuration found. Jan 16 23:59:35.452176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:59:35.520833 systemd[1]: Reloading finished in 276 ms. Jan 16 23:59:35.575121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:35.578501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:35.581839 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 23:59:35.582382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:35.591584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:35.713926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 23:59:35.723423 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:59:35.772156 kubelet[1833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:59:35.772589 kubelet[1833]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:59:35.772660 kubelet[1833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:59:35.772985 kubelet[1833]: I0116 23:59:35.772934 1833 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:59:36.772403 kubelet[1833]: I0116 23:59:36.772352 1833 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 23:59:36.772931 kubelet[1833]: I0116 23:59:36.772911 1833 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:59:36.773483 kubelet[1833]: I0116 23:59:36.773456 1833 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 23:59:36.811941 kubelet[1833]: I0116 23:59:36.811894 1833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:59:36.820267 kubelet[1833]: E0116 23:59:36.820186 1833 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:59:36.820267 kubelet[1833]: I0116 23:59:36.820224 1833 server.go:1421] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:59:36.822904 kubelet[1833]: I0116 23:59:36.822878 1833 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 23:59:36.823826 kubelet[1833]: I0116 23:59:36.823768 1833 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:59:36.824023 kubelet[1833]: I0116 23:59:36.823822 1833 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemory
ManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 23:59:36.824108 kubelet[1833]: I0116 23:59:36.824093 1833 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 23:59:36.824108 kubelet[1833]: I0116 23:59:36.824102 1833 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 23:59:36.824407 kubelet[1833]: I0116 23:59:36.824372 1833 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:59:36.828336 kubelet[1833]: I0116 23:59:36.828096 1833 kubelet.go:446] "Attempting to sync node with API server" Jan 16 23:59:36.828336 kubelet[1833]: I0116 23:59:36.828132 1833 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:59:36.828336 kubelet[1833]: I0116 23:59:36.828149 1833 kubelet.go:352] "Adding apiserver pod source" Jan 16 23:59:36.828336 kubelet[1833]: I0116 23:59:36.828159 1833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:59:36.831461 kubelet[1833]: E0116 23:59:36.831413 1833 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:36.831561 kubelet[1833]: E0116 23:59:36.831471 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:36.834892 kubelet[1833]: I0116 23:59:36.833834 1833 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:59:36.834892 kubelet[1833]: I0116 23:59:36.834602 1833 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 23:59:36.834892 kubelet[1833]: W0116 23:59:36.834719 1833 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 16 23:59:36.835848 kubelet[1833]: I0116 23:59:36.835826 1833 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:59:36.835975 kubelet[1833]: I0116 23:59:36.835963 1833 server.go:1287] "Started kubelet" Jan 16 23:59:36.837758 kubelet[1833]: I0116 23:59:36.837684 1833 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:59:36.839608 kubelet[1833]: I0116 23:59:36.839564 1833 server.go:479] "Adding debug handlers to kubelet server" Jan 16 23:59:36.839907 kubelet[1833]: I0116 23:59:36.839834 1833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:59:36.840404 kubelet[1833]: I0116 23:59:36.840380 1833 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:59:36.844625 kubelet[1833]: I0116 23:59:36.844593 1833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:59:36.845447 kubelet[1833]: I0116 23:59:36.845421 1833 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:59:36.851895 kubelet[1833]: E0116 23:59:36.850413 1833 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 23:59:36.851895 kubelet[1833]: E0116 23:59:36.850718 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:36.851895 kubelet[1833]: I0116 23:59:36.850749 1833 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:59:36.851895 kubelet[1833]: I0116 23:59:36.851319 1833 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:59:36.851895 kubelet[1833]: I0116 23:59:36.851404 1833 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:59:36.855161 kubelet[1833]: I0116 23:59:36.855093 1833 factory.go:221] Registration of the systemd container factory successfully Jan 16 23:59:36.855285 kubelet[1833]: I0116 23:59:36.855261 1833 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:59:36.857011 kubelet[1833]: I0116 23:59:36.856987 1833 factory.go:221] Registration of the containerd container factory successfully Jan 16 23:59:36.871581 kubelet[1833]: E0116 23:59:36.870536 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.188b5b959e3a7ad0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2026-01-16 23:59:36.835939024 +0000 UTC m=+1.108047283,LastTimestamp:2026-01-16 23:59:36.835939024 +0000 UTC m=+1.108047283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 16 23:59:36.873810 kubelet[1833]: W0116 23:59:36.873781 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 16 23:59:36.873976 kubelet[1833]: E0116 23:59:36.873955 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 16 23:59:36.874166 kubelet[1833]: W0116 23:59:36.874147 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 16 23:59:36.874250 kubelet[1833]: E0116 23:59:36.874236 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 16 23:59:36.890291 kubelet[1833]: W0116 23:59:36.889292 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 16 23:59:36.890291 kubelet[1833]: E0116 23:59:36.889369 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the 
cluster scope" logger="UnhandledError" Jan 16 23:59:36.890291 kubelet[1833]: E0116 23:59:36.889449 1833 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 16 23:59:36.890291 kubelet[1833]: E0116 23:59:36.889583 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.188b5b959f1723da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2026-01-16 23:59:36.850400218 +0000 UTC m=+1.122508437,LastTimestamp:2026-01-16 23:59:36.850400218 +0000 UTC m=+1.122508437,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 16 23:59:36.892527 kubelet[1833]: I0116 23:59:36.892052 1833 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:59:36.892527 kubelet[1833]: I0116 23:59:36.892069 1833 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:59:36.892527 kubelet[1833]: I0116 23:59:36.892087 1833 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:59:36.893391 kubelet[1833]: E0116 23:59:36.893182 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.188b5b95a0a9ecf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.4 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2026-01-16 23:59:36.876797173 +0000 UTC m=+1.148905392,LastTimestamp:2026-01-16 23:59:36.876797173 +0000 UTC m=+1.148905392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 16 23:59:36.895390 kubelet[1833]: I0116 23:59:36.895082 1833 policy_none.go:49] "None policy: Start" Jan 16 23:59:36.895390 kubelet[1833]: I0116 23:59:36.895130 1833 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:59:36.895390 kubelet[1833]: I0116 23:59:36.895146 1833 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:59:36.905603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 23:59:36.919067 kubelet[1833]: I0116 23:59:36.919015 1833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 23:59:36.920473 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 23:59:36.923255 kubelet[1833]: I0116 23:59:36.923163 1833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 23:59:36.923255 kubelet[1833]: I0116 23:59:36.923189 1833 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 23:59:36.923729 kubelet[1833]: I0116 23:59:36.923207 1833 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 23:59:36.923729 kubelet[1833]: I0116 23:59:36.923411 1833 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 23:59:36.923729 kubelet[1833]: E0116 23:59:36.923517 1833 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:59:36.927322 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 23:59:36.942881 kubelet[1833]: I0116 23:59:36.942820 1833 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 23:59:36.943244 kubelet[1833]: I0116 23:59:36.943213 1833 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:59:36.943308 kubelet[1833]: I0116 23:59:36.943245 1833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:59:36.944689 kubelet[1833]: I0116 23:59:36.944570 1833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:59:36.945816 kubelet[1833]: E0116 23:59:36.945758 1833 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 23:59:36.945816 kubelet[1833]: E0116 23:59:36.945822 1833 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Jan 16 23:59:37.045232 kubelet[1833]: I0116 23:59:37.045076 1833 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.4" Jan 16 23:59:37.051413 kubelet[1833]: I0116 23:59:37.051379 1833 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.4" Jan 16 23:59:37.051413 kubelet[1833]: E0116 23:59:37.051414 1833 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found" Jan 16 23:59:37.085207 kubelet[1833]: E0116 23:59:37.085130 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.186324 kubelet[1833]: E0116 23:59:37.186243 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.261254 sudo[1711]: pam_unix(sudo:session): session closed for user root Jan 16 23:59:37.287099 kubelet[1833]: E0116 23:59:37.287026 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.365125 sshd[1708]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:37.370808 systemd[1]: sshd@6-91.99.121.187:22-4.153.228.146:51418.service: Deactivated successfully. Jan 16 23:59:37.373136 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 23:59:37.374310 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Jan 16 23:59:37.375312 systemd-logind[1462]: Removed session 7. 
Jan 16 23:59:37.387784 kubelet[1833]: E0116 23:59:37.387721 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.488521 kubelet[1833]: E0116 23:59:37.488444 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.589500 kubelet[1833]: E0116 23:59:37.589435 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.690557 kubelet[1833]: E0116 23:59:37.690479 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.779821 kubelet[1833]: I0116 23:59:37.779518 1833 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 16 23:59:37.779821 kubelet[1833]: W0116 23:59:37.779765 1833 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 16 23:59:37.790743 kubelet[1833]: E0116 23:59:37.790667 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.832333 kubelet[1833]: E0116 23:59:37.832247 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:37.891563 kubelet[1833]: E0116 23:59:37.891478 1833 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 16 23:59:37.993609 kubelet[1833]: I0116 23:59:37.993434 1833 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 16 23:59:37.994341 containerd[1472]: time="2026-01-16T23:59:37.993915582Z" level=info msg="No cni config template is specified, 
wait for other system components to drop the config." Jan 16 23:59:37.994921 kubelet[1833]: I0116 23:59:37.994722 1833 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 16 23:59:38.833057 kubelet[1833]: I0116 23:59:38.833006 1833 apiserver.go:52] "Watching apiserver" Jan 16 23:59:38.833918 kubelet[1833]: E0116 23:59:38.833395 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:38.841060 kubelet[1833]: E0116 23:59:38.840746 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 16 23:59:38.848191 systemd[1]: Created slice kubepods-besteffort-podd9699256_4015_4bbc_8333_76427ca5ab87.slice - libcontainer container kubepods-besteffort-podd9699256_4015_4bbc_8333_76427ca5ab87.slice. 
Jan 16 23:59:38.852838 kubelet[1833]: I0116 23:59:38.851929 1833 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:59:38.865908 kubelet[1833]: I0116 23:59:38.864576 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-var-lib-calico\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.865908 kubelet[1833]: I0116 23:59:38.864619 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-var-run-calico\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.865908 kubelet[1833]: I0116 23:59:38.864645 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac0c5618-fb0f-49f6-8265-d8bd0ced516e-registration-dir\") pod \"csi-node-driver-ch2ph\" (UID: \"ac0c5618-fb0f-49f6-8265-d8bd0ced516e\") " pod="calico-system/csi-node-driver-ch2ph" Jan 16 23:59:38.865908 kubelet[1833]: I0116 23:59:38.864664 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ac0c5618-fb0f-49f6-8265-d8bd0ced516e-varrun\") pod \"csi-node-driver-ch2ph\" (UID: \"ac0c5618-fb0f-49f6-8265-d8bd0ced516e\") " pod="calico-system/csi-node-driver-ch2ph" Jan 16 23:59:38.865908 kubelet[1833]: I0116 23:59:38.864685 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st78v\" (UniqueName: \"kubernetes.io/projected/ac0c5618-fb0f-49f6-8265-d8bd0ced516e-kube-api-access-st78v\") 
pod \"csi-node-driver-ch2ph\" (UID: \"ac0c5618-fb0f-49f6-8265-d8bd0ced516e\") " pod="calico-system/csi-node-driver-ch2ph" Jan 16 23:59:38.866095 kubelet[1833]: I0116 23:59:38.864704 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f88121c-f911-41e5-af7b-5ef9149d724d-lib-modules\") pod \"kube-proxy-n8m9h\" (UID: \"4f88121c-f911-41e5-af7b-5ef9149d724d\") " pod="kube-system/kube-proxy-n8m9h" Jan 16 23:59:38.866095 kubelet[1833]: I0116 23:59:38.864724 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-cni-bin-dir\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866095 kubelet[1833]: I0116 23:59:38.864741 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac0c5618-fb0f-49f6-8265-d8bd0ced516e-socket-dir\") pod \"csi-node-driver-ch2ph\" (UID: \"ac0c5618-fb0f-49f6-8265-d8bd0ced516e\") " pod="calico-system/csi-node-driver-ch2ph" Jan 16 23:59:38.866095 kubelet[1833]: I0116 23:59:38.864770 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9699256-4015-4bbc-8333-76427ca5ab87-tigera-ca-bundle\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866095 kubelet[1833]: I0116 23:59:38.864788 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-policysync\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " 
pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866196 kubelet[1833]: I0116 23:59:38.864808 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f88121c-f911-41e5-af7b-5ef9149d724d-kube-proxy\") pod \"kube-proxy-n8m9h\" (UID: \"4f88121c-f911-41e5-af7b-5ef9149d724d\") " pod="kube-system/kube-proxy-n8m9h" Jan 16 23:59:38.866196 kubelet[1833]: I0116 23:59:38.864836 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f88121c-f911-41e5-af7b-5ef9149d724d-xtables-lock\") pod \"kube-proxy-n8m9h\" (UID: \"4f88121c-f911-41e5-af7b-5ef9149d724d\") " pod="kube-system/kube-proxy-n8m9h" Jan 16 23:59:38.866196 kubelet[1833]: I0116 23:59:38.864867 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-flexvol-driver-host\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866196 kubelet[1833]: I0116 23:59:38.864887 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-cni-net-dir\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866196 kubelet[1833]: I0116 23:59:38.864906 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-lib-modules\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866312 kubelet[1833]: I0116 
23:59:38.864925 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9699256-4015-4bbc-8333-76427ca5ab87-node-certs\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866312 kubelet[1833]: I0116 23:59:38.864942 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-xtables-lock\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866312 kubelet[1833]: I0116 23:59:38.864971 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk6ml\" (UniqueName: \"kubernetes.io/projected/d9699256-4015-4bbc-8333-76427ca5ab87-kube-api-access-mk6ml\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.866312 kubelet[1833]: I0116 23:59:38.864990 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac0c5618-fb0f-49f6-8265-d8bd0ced516e-kubelet-dir\") pod \"csi-node-driver-ch2ph\" (UID: \"ac0c5618-fb0f-49f6-8265-d8bd0ced516e\") " pod="calico-system/csi-node-driver-ch2ph" Jan 16 23:59:38.866312 kubelet[1833]: I0116 23:59:38.865010 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chhvl\" (UniqueName: \"kubernetes.io/projected/4f88121c-f911-41e5-af7b-5ef9149d724d-kube-api-access-chhvl\") pod \"kube-proxy-n8m9h\" (UID: \"4f88121c-f911-41e5-af7b-5ef9149d724d\") " pod="kube-system/kube-proxy-n8m9h" Jan 16 23:59:38.866436 kubelet[1833]: I0116 23:59:38.865029 1833 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9699256-4015-4bbc-8333-76427ca5ab87-cni-log-dir\") pod \"calico-node-pmq49\" (UID: \"d9699256-4015-4bbc-8333-76427ca5ab87\") " pod="calico-system/calico-node-pmq49" Jan 16 23:59:38.867836 systemd[1]: Created slice kubepods-besteffort-pod4f88121c_f911_41e5_af7b_5ef9149d724d.slice - libcontainer container kubepods-besteffort-pod4f88121c_f911_41e5_af7b_5ef9149d724d.slice. Jan 16 23:59:38.971891 kubelet[1833]: E0116 23:59:38.971411 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:59:38.971891 kubelet[1833]: W0116 23:59:38.971434 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:59:38.971891 kubelet[1833]: E0116 23:59:38.971455 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:59:38.972351 kubelet[1833]: E0116 23:59:38.972333 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:59:38.976304 kubelet[1833]: W0116 23:59:38.973914 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:59:38.976304 kubelet[1833]: E0116 23:59:38.973953 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:59:38.976304 kubelet[1833]: E0116 23:59:38.974203 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:59:38.976304 kubelet[1833]: W0116 23:59:38.974214 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:59:38.976304 kubelet[1833]: E0116 23:59:38.974224 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:59:39.007507 kubelet[1833]: E0116 23:59:39.007477 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:59:39.007774 kubelet[1833]: W0116 23:59:39.007669 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:59:39.007774 kubelet[1833]: E0116 23:59:39.007699 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:59:39.015314 kubelet[1833]: E0116 23:59:39.015160 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:59:39.015314 kubelet[1833]: W0116 23:59:39.015184 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:59:39.015314 kubelet[1833]: E0116 23:59:39.015223 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 23:59:39.015738 kubelet[1833]: E0116 23:59:39.015680 1833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 23:59:39.015738 kubelet[1833]: W0116 23:59:39.015694 1833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 23:59:39.015738 kubelet[1833]: E0116 23:59:39.015710 1833 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 23:59:39.168235 containerd[1472]: time="2026-01-16T23:59:39.167673227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmq49,Uid:d9699256-4015-4bbc-8333-76427ca5ab87,Namespace:calico-system,Attempt:0,}" Jan 16 23:59:39.170610 containerd[1472]: time="2026-01-16T23:59:39.170573414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8m9h,Uid:4f88121c-f911-41e5-af7b-5ef9149d724d,Namespace:kube-system,Attempt:0,}" Jan 16 23:59:39.827265 containerd[1472]: time="2026-01-16T23:59:39.827170358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:39.828309 containerd[1472]: time="2026-01-16T23:59:39.828263138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 16 23:59:39.829063 containerd[1472]: time="2026-01-16T23:59:39.829030609Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:39.831007 containerd[1472]: time="2026-01-16T23:59:39.830686801Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:39.831007 containerd[1472]: time="2026-01-16T23:59:39.830937064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 23:59:39.833985 containerd[1472]: time="2026-01-16T23:59:39.833948060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:39.834536 kubelet[1833]: E0116 23:59:39.834505 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:39.836805 containerd[1472]: time="2026-01-16T23:59:39.834838782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 667.038623ms" Jan 16 23:59:39.836805 containerd[1472]: time="2026-01-16T23:59:39.835918321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 665.109286ms" Jan 16 23:59:39.946529 containerd[1472]: time="2026-01-16T23:59:39.946293212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:59:39.946529 containerd[1472]: time="2026-01-16T23:59:39.946338776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:59:39.946529 containerd[1472]: time="2026-01-16T23:59:39.946349577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:39.946529 containerd[1472]: time="2026-01-16T23:59:39.946450266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:39.950702 containerd[1472]: time="2026-01-16T23:59:39.950477716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:59:39.950702 containerd[1472]: time="2026-01-16T23:59:39.950532841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:59:39.950702 containerd[1472]: time="2026-01-16T23:59:39.950543522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:39.950702 containerd[1472]: time="2026-01-16T23:59:39.950621969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:39.978179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944509224.mount: Deactivated successfully. Jan 16 23:59:40.029519 systemd[1]: run-containerd-runc-k8s.io-abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b-runc.zFqGkv.mount: Deactivated successfully. Jan 16 23:59:40.037277 systemd[1]: Started cri-containerd-abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b.scope - libcontainer container abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b. Jan 16 23:59:40.042987 systemd[1]: Started cri-containerd-4daa7caa65ff2871f48fd3cbb499cb3f0b2fd23e5ffc7a2bfab9907b357ff1b3.scope - libcontainer container 4daa7caa65ff2871f48fd3cbb499cb3f0b2fd23e5ffc7a2bfab9907b357ff1b3. 
Jan 16 23:59:40.079330 containerd[1472]: time="2026-01-16T23:59:40.078933949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmq49,Uid:d9699256-4015-4bbc-8333-76427ca5ab87,Namespace:calico-system,Attempt:0,} returns sandbox id \"abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b\"" Jan 16 23:59:40.083556 containerd[1472]: time="2026-01-16T23:59:40.083354934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 23:59:40.085387 containerd[1472]: time="2026-01-16T23:59:40.085193055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8m9h,Uid:4f88121c-f911-41e5-af7b-5ef9149d724d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4daa7caa65ff2871f48fd3cbb499cb3f0b2fd23e5ffc7a2bfab9907b357ff1b3\"" Jan 16 23:59:40.836810 kubelet[1833]: E0116 23:59:40.836741 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:40.927224 kubelet[1833]: E0116 23:59:40.926133 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 16 23:59:41.677715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849903001.mount: Deactivated successfully. 
Jan 16 23:59:41.752906 containerd[1472]: time="2026-01-16T23:59:41.752833521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:41.754128 containerd[1472]: time="2026-01-16T23:59:41.754076504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Jan 16 23:59:41.754923 containerd[1472]: time="2026-01-16T23:59:41.754824006Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:41.757476 containerd[1472]: time="2026-01-16T23:59:41.757426662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:41.758236 containerd[1472]: time="2026-01-16T23:59:41.758205326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.674802268s" Jan 16 23:59:41.758459 containerd[1472]: time="2026-01-16T23:59:41.758327257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 16 23:59:41.761029 containerd[1472]: time="2026-01-16T23:59:41.760985197Z" level=info msg="CreateContainer within sandbox \"abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 
23:59:41.761411 containerd[1472]: time="2026-01-16T23:59:41.761368829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 23:59:41.777906 containerd[1472]: time="2026-01-16T23:59:41.777827234Z" level=info msg="CreateContainer within sandbox \"abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605\"" Jan 16 23:59:41.780788 containerd[1472]: time="2026-01-16T23:59:41.778698506Z" level=info msg="StartContainer for \"6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605\"" Jan 16 23:59:41.815018 systemd[1]: Started cri-containerd-6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605.scope - libcontainer container 6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605. Jan 16 23:59:41.837171 kubelet[1833]: E0116 23:59:41.837125 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:41.847453 containerd[1472]: time="2026-01-16T23:59:41.846824955Z" level=info msg="StartContainer for \"6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605\" returns successfully" Jan 16 23:59:41.857295 systemd[1]: cri-containerd-6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605.scope: Deactivated successfully. Jan 16 23:59:41.883268 update_engine[1463]: I20260116 23:59:41.883219 1463 update_attempter.cc:509] Updating boot flags... 
Jan 16 23:59:41.899847 containerd[1472]: time="2026-01-16T23:59:41.899355831Z" level=info msg="shim disconnected" id=6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605 namespace=k8s.io Jan 16 23:59:41.899847 containerd[1472]: time="2026-01-16T23:59:41.899414836Z" level=warning msg="cleaning up after shim disconnected" id=6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605 namespace=k8s.io Jan 16 23:59:41.899847 containerd[1472]: time="2026-01-16T23:59:41.899428517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 23:59:41.946025 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2042) Jan 16 23:59:42.029382 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2029) Jan 16 23:59:42.088032 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2029) Jan 16 23:59:42.646130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6eb312efdd7179d7baf2eff638d97b28499aefdf172bf8e38330cd2b6a816605-rootfs.mount: Deactivated successfully. Jan 16 23:59:42.808308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706712455.mount: Deactivated successfully. 
Jan 16 23:59:42.837359 kubelet[1833]: E0116 23:59:42.837287 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:42.927146 kubelet[1833]: E0116 23:59:42.926677 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 16 23:59:43.155921 containerd[1472]: time="2026-01-16T23:59:43.155227389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:43.156908 containerd[1472]: time="2026-01-16T23:59:43.156868032Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750" Jan 16 23:59:43.158898 containerd[1472]: time="2026-01-16T23:59:43.157891669Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:43.161060 containerd[1472]: time="2026-01-16T23:59:43.161012104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:43.163265 containerd[1472]: time="2026-01-16T23:59:43.163034416Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.401603142s" Jan 16 23:59:43.163265 containerd[1472]: 
time="2026-01-16T23:59:43.163100541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 16 23:59:43.166293 containerd[1472]: time="2026-01-16T23:59:43.165814065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 16 23:59:43.166706 containerd[1472]: time="2026-01-16T23:59:43.166558081Z" level=info msg="CreateContainer within sandbox \"4daa7caa65ff2871f48fd3cbb499cb3f0b2fd23e5ffc7a2bfab9907b357ff1b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 23:59:43.182831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515697309.mount: Deactivated successfully. Jan 16 23:59:43.190465 containerd[1472]: time="2026-01-16T23:59:43.190310465Z" level=info msg="CreateContainer within sandbox \"4daa7caa65ff2871f48fd3cbb499cb3f0b2fd23e5ffc7a2bfab9907b357ff1b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48bd9829cb751fe01173537eed34a731203daee9d1b19bcee60340482652d946\"" Jan 16 23:59:43.192901 containerd[1472]: time="2026-01-16T23:59:43.191374785Z" level=info msg="StartContainer for \"48bd9829cb751fe01173537eed34a731203daee9d1b19bcee60340482652d946\"" Jan 16 23:59:43.223109 systemd[1]: Started cri-containerd-48bd9829cb751fe01173537eed34a731203daee9d1b19bcee60340482652d946.scope - libcontainer container 48bd9829cb751fe01173537eed34a731203daee9d1b19bcee60340482652d946. 
Jan 16 23:59:43.252425 containerd[1472]: time="2026-01-16T23:59:43.252303043Z" level=info msg="StartContainer for \"48bd9829cb751fe01173537eed34a731203daee9d1b19bcee60340482652d946\" returns successfully" Jan 16 23:59:43.837520 kubelet[1833]: E0116 23:59:43.837454 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:44.837909 kubelet[1833]: E0116 23:59:44.837824 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:44.925232 kubelet[1833]: E0116 23:59:44.925178 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 16 23:59:45.687186 containerd[1472]: time="2026-01-16T23:59:45.687136073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:45.689355 containerd[1472]: time="2026-01-16T23:59:45.689312101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 16 23:59:45.691049 containerd[1472]: time="2026-01-16T23:59:45.690682955Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:45.693444 containerd[1472]: time="2026-01-16T23:59:45.693344457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:45.696689 containerd[1472]: time="2026-01-16T23:59:45.696309659Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.530443791s" Jan 16 23:59:45.696689 containerd[1472]: time="2026-01-16T23:59:45.696391825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 16 23:59:45.700283 containerd[1472]: time="2026-01-16T23:59:45.700236567Z" level=info msg="CreateContainer within sandbox \"abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 23:59:45.716438 containerd[1472]: time="2026-01-16T23:59:45.716357188Z" level=info msg="CreateContainer within sandbox \"abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333\"" Jan 16 23:59:45.717250 containerd[1472]: time="2026-01-16T23:59:45.717187525Z" level=info msg="StartContainer for \"457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333\"" Jan 16 23:59:45.749175 systemd[1]: Started cri-containerd-457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333.scope - libcontainer container 457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333. 
Jan 16 23:59:45.782398 containerd[1472]: time="2026-01-16T23:59:45.782359575Z" level=info msg="StartContainer for \"457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333\" returns successfully" Jan 16 23:59:45.839073 kubelet[1833]: E0116 23:59:45.839011 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:45.988599 kubelet[1833]: I0116 23:59:45.988108 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n8m9h" podStartSLOduration=5.909636517 podStartE2EDuration="8.988088465s" podCreationTimestamp="2026-01-16 23:59:37 +0000 UTC" firstStartedPulling="2026-01-16 23:59:40.086705907 +0000 UTC m=+4.358814126" lastFinishedPulling="2026-01-16 23:59:43.165157855 +0000 UTC m=+7.437266074" observedRunningTime="2026-01-16 23:59:43.965993106 +0000 UTC m=+8.238101405" watchObservedRunningTime="2026-01-16 23:59:45.988088465 +0000 UTC m=+10.260196644" Jan 16 23:59:46.287167 containerd[1472]: time="2026-01-16T23:59:46.287011549Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 23:59:46.290890 systemd[1]: cri-containerd-457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333.scope: Deactivated successfully. Jan 16 23:59:46.315919 kubelet[1833]: I0116 23:59:46.315401 1833 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 16 23:59:46.316312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333-rootfs.mount: Deactivated successfully. 
Jan 16 23:59:46.455692 containerd[1472]: time="2026-01-16T23:59:46.455606339Z" level=info msg="shim disconnected" id=457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333 namespace=k8s.io Jan 16 23:59:46.456285 containerd[1472]: time="2026-01-16T23:59:46.456058489Z" level=warning msg="cleaning up after shim disconnected" id=457881563551e41be6eae3a53543e9df6abfd8ffdbfa3ce35c2a1ebf8aa65333 namespace=k8s.io Jan 16 23:59:46.456285 containerd[1472]: time="2026-01-16T23:59:46.456083850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 23:59:46.469735 containerd[1472]: time="2026-01-16T23:59:46.469644254Z" level=warning msg="cleanup warnings time=\"2026-01-16T23:59:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 16 23:59:46.840192 kubelet[1833]: E0116 23:59:46.840108 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:46.935618 systemd[1]: Created slice kubepods-besteffort-podac0c5618_fb0f_49f6_8265_d8bd0ced516e.slice - libcontainer container kubepods-besteffort-podac0c5618_fb0f_49f6_8265_d8bd0ced516e.slice. 
Jan 16 23:59:46.938687 containerd[1472]: time="2026-01-16T23:59:46.938644546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ch2ph,Uid:ac0c5618-fb0f-49f6-8265-d8bd0ced516e,Namespace:calico-system,Attempt:0,}" Jan 16 23:59:46.978243 containerd[1472]: time="2026-01-16T23:59:46.978193724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 23:59:47.033755 containerd[1472]: time="2026-01-16T23:59:47.033694726Z" level=error msg="Failed to destroy network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:59:47.034157 containerd[1472]: time="2026-01-16T23:59:47.034111992Z" level=error msg="encountered an error cleaning up failed sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:59:47.034214 containerd[1472]: time="2026-01-16T23:59:47.034181436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ch2ph,Uid:ac0c5618-fb0f-49f6-8265-d8bd0ced516e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:59:47.035473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783-shm.mount: Deactivated successfully. 
Jan 16 23:59:47.036203 kubelet[1833]: E0116 23:59:47.035759 1833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:59:47.036203 kubelet[1833]: E0116 23:59:47.035846 1833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ch2ph" Jan 16 23:59:47.036203 kubelet[1833]: E0116 23:59:47.035891 1833 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ch2ph" Jan 16 23:59:47.036383 kubelet[1833]: E0116 23:59:47.035942 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 16 23:59:47.841227 kubelet[1833]: E0116 23:59:47.841152 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 23:59:47.978374 kubelet[1833]: I0116 23:59:47.978311 1833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Jan 16 23:59:47.979394 containerd[1472]: time="2026-01-16T23:59:47.979298052Z" level=info msg="StopPodSandbox for \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\"" Jan 16 23:59:47.979976 containerd[1472]: time="2026-01-16T23:59:47.979582590Z" level=info msg="Ensure that sandbox 7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783 in task-service has been cleanup successfully" Jan 16 23:59:48.007365 containerd[1472]: time="2026-01-16T23:59:48.007176253Z" level=error msg="StopPodSandbox for \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\" failed" error="failed to destroy network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 23:59:48.007613 kubelet[1833]: E0116 23:59:48.007520 1833 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 16 23:59:48.007721 kubelet[1833]: E0116 23:59:48.007602 1833 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"}
Jan 16 23:59:48.007721 kubelet[1833]: E0116 23:59:48.007677 1833 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac0c5618-fb0f-49f6-8265-d8bd0ced516e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 16 23:59:48.007920 kubelet[1833]: E0116 23:59:48.007713 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac0c5618-fb0f-49f6-8265-d8bd0ced516e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e"
Jan 16 23:59:48.842214 kubelet[1833]: E0116 23:59:48.842166 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:49.842425 kubelet[1833]: E0116 23:59:49.842369 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:50.843190 kubelet[1833]: E0116 23:59:50.843117 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:51.000073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087644051.mount: Deactivated successfully.
Jan 16 23:59:51.027522 containerd[1472]: time="2026-01-16T23:59:51.027387903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:51.028730 containerd[1472]: time="2026-01-16T23:59:51.028651690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562"
Jan 16 23:59:51.029615 containerd[1472]: time="2026-01-16T23:59:51.029561777Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:51.032352 containerd[1472]: time="2026-01-16T23:59:51.032291120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:51.033375 containerd[1472]: time="2026-01-16T23:59:51.032897512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.05457902s"
Jan 16 23:59:51.033375 containerd[1472]: time="2026-01-16T23:59:51.032935394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\""
Jan 16 23:59:51.046168 containerd[1472]: time="2026-01-16T23:59:51.046089122Z" level=info msg="CreateContainer within sandbox \"abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 16 23:59:51.063074 containerd[1472]: time="2026-01-16T23:59:51.063006168Z" level=info msg="CreateContainer within sandbox \"abfd685804bd43f9e12094585bb3e57280d446cbe0080bdc991ed0b86567c14b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"77ad8fbf99069a580ce7aa413e16103cb319c5bd64aa37872d7275e0b81c1ccd\""
Jan 16 23:59:51.064090 containerd[1472]: time="2026-01-16T23:59:51.064015140Z" level=info msg="StartContainer for \"77ad8fbf99069a580ce7aa413e16103cb319c5bd64aa37872d7275e0b81c1ccd\""
Jan 16 23:59:51.096310 systemd[1]: Started cri-containerd-77ad8fbf99069a580ce7aa413e16103cb319c5bd64aa37872d7275e0b81c1ccd.scope - libcontainer container 77ad8fbf99069a580ce7aa413e16103cb319c5bd64aa37872d7275e0b81c1ccd.
Jan 16 23:59:51.131094 containerd[1472]: time="2026-01-16T23:59:51.130322371Z" level=info msg="StartContainer for \"77ad8fbf99069a580ce7aa413e16103cb319c5bd64aa37872d7275e0b81c1ccd\" returns successfully"
Jan 16 23:59:51.271958 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 16 23:59:51.272089 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 16 23:59:51.844293 kubelet[1833]: E0116 23:59:51.844228 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:52.018762 kubelet[1833]: I0116 23:59:52.018689 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pmq49" podStartSLOduration=4.066762831 podStartE2EDuration="15.018667668s" podCreationTimestamp="2026-01-16 23:59:37 +0000 UTC" firstStartedPulling="2026-01-16 23:59:40.081745954 +0000 UTC m=+4.353854173" lastFinishedPulling="2026-01-16 23:59:51.033650791 +0000 UTC m=+15.305759010" observedRunningTime="2026-01-16 23:59:52.017593854 +0000 UTC m=+16.289702153" watchObservedRunningTime="2026-01-16 23:59:52.018667668 +0000 UTC m=+16.290775887"
Jan 16 23:59:52.556134 systemd[1]: Created slice kubepods-besteffort-pod775fb6df_2bf0_4697_a005_ba06222ec659.slice - libcontainer container kubepods-besteffort-pod775fb6df_2bf0_4697_a005_ba06222ec659.slice.
Jan 16 23:59:52.557361 kubelet[1833]: I0116 23:59:52.557269 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7m7n\" (UniqueName: \"kubernetes.io/projected/775fb6df-2bf0-4697-a005-ba06222ec659-kube-api-access-m7m7n\") pod \"nginx-deployment-7fcdb87857-pfwzz\" (UID: \"775fb6df-2bf0-4697-a005-ba06222ec659\") " pod="default/nginx-deployment-7fcdb87857-pfwzz"
Jan 16 23:59:52.845574 kubelet[1833]: E0116 23:59:52.845204 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:52.860187 containerd[1472]: time="2026-01-16T23:59:52.860143535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pfwzz,Uid:775fb6df-2bf0-4697-a005-ba06222ec659,Namespace:default,Attempt:0,}"
Jan 16 23:59:52.995147 kubelet[1833]: I0116 23:59:52.994456 1833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 23:59:53.041893 kernel: bpftool[2548]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 16 23:59:53.110232 systemd-networkd[1382]: cali936d5a01f7b: Link UP
Jan 16 23:59:53.112404 systemd-networkd[1382]: cali936d5a01f7b: Gained carrier
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:52.913 [INFO][2500] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:52.944 [INFO][2500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0 nginx-deployment-7fcdb87857- default 775fb6df-2bf0-4697-a005-ba06222ec659 1541 0 2026-01-16 23:59:52 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-7fcdb87857-pfwzz eth0 default [] [] [kns.default ksa.default.default] cali936d5a01f7b [] [] }} ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:52.945 [INFO][2500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.032 [INFO][2515] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" HandleID="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.032 [INFO][2515] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" HandleID="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338e20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-7fcdb87857-pfwzz", "timestamp":"2026-01-16 23:59:53.032366404 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.032 [INFO][2515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.032 [INFO][2515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.033 [INFO][2515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4'
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.049 [INFO][2515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.057 [INFO][2515] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.066 [INFO][2515] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.070 [INFO][2515] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.073 [INFO][2515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.073 [INFO][2515] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.078 [INFO][2515] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.085 [INFO][2515] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.096 [INFO][2515] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.096 [INFO][2515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" host="10.0.0.4"
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.096 [INFO][2515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:59:53.129049 containerd[1472]: 2026-01-16 23:59:53.096 [INFO][2515] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" HandleID="k8s-pod-network.47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Workload="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0"
Jan 16 23:59:53.129750 containerd[1472]: 2026-01-16 23:59:53.099 [INFO][2500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"775fb6df-2bf0-4697-a005-ba06222ec659", ResourceVersion:"1541", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-pfwzz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali936d5a01f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:59:53.129750 containerd[1472]: 2026-01-16 23:59:53.099 [INFO][2500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.193/32] ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0"
Jan 16 23:59:53.129750 containerd[1472]: 2026-01-16 23:59:53.099 [INFO][2500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali936d5a01f7b ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0"
Jan 16 23:59:53.129750 containerd[1472]: 2026-01-16 23:59:53.111 [INFO][2500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0"
Jan 16 23:59:53.129750 containerd[1472]: 2026-01-16 23:59:53.113 [INFO][2500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"775fb6df-2bf0-4697-a005-ba06222ec659", ResourceVersion:"1541", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45", Pod:"nginx-deployment-7fcdb87857-pfwzz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali936d5a01f7b", MAC:"ee:e7:a0:ef:59:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 23:59:53.129750 containerd[1472]: 2026-01-16 23:59:53.124 [INFO][2500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45" Namespace="default" Pod="nginx-deployment-7fcdb87857-pfwzz" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--7fcdb87857--pfwzz-eth0"
Jan 16 23:59:53.151756 containerd[1472]: time="2026-01-16T23:59:53.151495432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 23:59:53.151756 containerd[1472]: time="2026-01-16T23:59:53.151609918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 23:59:53.151756 containerd[1472]: time="2026-01-16T23:59:53.151654640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:53.152096 containerd[1472]: time="2026-01-16T23:59:53.151994496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 23:59:53.189434 systemd[1]: Started cri-containerd-47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45.scope - libcontainer container 47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45.
Jan 16 23:59:53.228877 containerd[1472]: time="2026-01-16T23:59:53.228768401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pfwzz,Uid:775fb6df-2bf0-4697-a005-ba06222ec659,Namespace:default,Attempt:0,} returns sandbox id \"47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45\""
Jan 16 23:59:53.230496 containerd[1472]: time="2026-01-16T23:59:53.230406160Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 16 23:59:53.258485 systemd-networkd[1382]: vxlan.calico: Link UP
Jan 16 23:59:53.258492 systemd-networkd[1382]: vxlan.calico: Gained carrier
Jan 16 23:59:53.846118 kubelet[1833]: E0116 23:59:53.845979 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:54.410260 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL
Jan 16 23:59:54.731201 systemd-networkd[1382]: cali936d5a01f7b: Gained IPv6LL
Jan 16 23:59:54.847135 kubelet[1833]: E0116 23:59:54.846993 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:55.538162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908317431.mount: Deactivated successfully.
Jan 16 23:59:55.847743 kubelet[1833]: E0116 23:59:55.847545 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:56.233000 containerd[1472]: time="2026-01-16T23:59:56.232943349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:56.234215 containerd[1472]: time="2026-01-16T23:59:56.234175002Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=62401393"
Jan 16 23:59:56.236759 containerd[1472]: time="2026-01-16T23:59:56.234927394Z" level=info msg="ImageCreate event name:\"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:56.237890 containerd[1472]: time="2026-01-16T23:59:56.237718274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 23:59:56.239601 containerd[1472]: time="2026-01-16T23:59:56.238768039Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"62401271\" in 3.008328998s"
Jan 16 23:59:56.239601 containerd[1472]: time="2026-01-16T23:59:56.238805681Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\""
Jan 16 23:59:56.241347 containerd[1472]: time="2026-01-16T23:59:56.241319509Z" level=info msg="CreateContainer within sandbox \"47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 16 23:59:56.256513 containerd[1472]: time="2026-01-16T23:59:56.256470521Z" level=info msg="CreateContainer within sandbox \"47b2725340d1c9dbfb5a57b3350478a59a648c8c7b1bb9a0bd2facb94ddcaa45\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"8a5f17e099f1712345d6bff72f4df343c4bab7a09ddfd532c6db57f9c31c2b95\""
Jan 16 23:59:56.257654 containerd[1472]: time="2026-01-16T23:59:56.257419522Z" level=info msg="StartContainer for \"8a5f17e099f1712345d6bff72f4df343c4bab7a09ddfd532c6db57f9c31c2b95\""
Jan 16 23:59:56.289059 systemd[1]: Started cri-containerd-8a5f17e099f1712345d6bff72f4df343c4bab7a09ddfd532c6db57f9c31c2b95.scope - libcontainer container 8a5f17e099f1712345d6bff72f4df343c4bab7a09ddfd532c6db57f9c31c2b95.
Jan 16 23:59:56.322443 containerd[1472]: time="2026-01-16T23:59:56.322373077Z" level=info msg="StartContainer for \"8a5f17e099f1712345d6bff72f4df343c4bab7a09ddfd532c6db57f9c31c2b95\" returns successfully"
Jan 16 23:59:56.829397 kubelet[1833]: E0116 23:59:56.829311 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:56.848514 kubelet[1833]: E0116 23:59:56.848437 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:57.849695 kubelet[1833]: E0116 23:59:57.849625 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:58.850239 kubelet[1833]: E0116 23:59:58.850188 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 16 23:59:58.926910 containerd[1472]: time="2026-01-16T23:59:58.926398202Z" level=info msg="StopPodSandbox for \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\""
Jan 16 23:59:59.017909 kubelet[1833]: I0116 23:59:59.017443 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-pfwzz" podStartSLOduration=4.007190944 podStartE2EDuration="7.017412627s" podCreationTimestamp="2026-01-16 23:59:52 +0000 UTC" firstStartedPulling="2026-01-16 23:59:53.229836252 +0000 UTC m=+17.501944471" lastFinishedPulling="2026-01-16 23:59:56.240057935 +0000 UTC m=+20.512166154" observedRunningTime="2026-01-16 23:59:57.020294243 +0000 UTC m=+21.292402462" watchObservedRunningTime="2026-01-16 23:59:59.017412627 +0000 UTC m=+23.289520886"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.019 [INFO][2774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.019 [INFO][2774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" iface="eth0" netns="/var/run/netns/cni-46039f24-76e3-bda5-3ddf-6a8101c8c6fc"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.020 [INFO][2774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" iface="eth0" netns="/var/run/netns/cni-46039f24-76e3-bda5-3ddf-6a8101c8c6fc"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.020 [INFO][2774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" iface="eth0" netns="/var/run/netns/cni-46039f24-76e3-bda5-3ddf-6a8101c8c6fc"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.020 [INFO][2774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.020 [INFO][2774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.040 [INFO][2782] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.041 [INFO][2782] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.041 [INFO][2782] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.054 [WARNING][2782] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.054 [INFO][2782] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.056 [INFO][2782] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:59:59.063159 containerd[1472]: 2026-01-16 23:59:59.059 [INFO][2774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 16 23:59:59.065643 containerd[1472]: time="2026-01-16T23:59:59.065415366Z" level=info msg="TearDown network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\" successfully"
Jan 16 23:59:59.065643 containerd[1472]: time="2026-01-16T23:59:59.065499849Z" level=info msg="StopPodSandbox for \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\" returns successfully"
Jan 16 23:59:59.066620 containerd[1472]: time="2026-01-16T23:59:59.066477807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ch2ph,Uid:ac0c5618-fb0f-49f6-8265-d8bd0ced516e,Namespace:calico-system,Attempt:1,}"
Jan 16 23:59:59.069063 systemd[1]: run-netns-cni\x2d46039f24\x2d76e3\x2dbda5\x2d3ddf\x2d6a8101c8c6fc.mount: Deactivated successfully.
Jan 16 23:59:59.234447 systemd-networkd[1382]: calie5196a145ab: Link UP
Jan 16 23:59:59.236036 systemd-networkd[1382]: calie5196a145ab: Gained carrier
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.132 [INFO][2789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--ch2ph-eth0 csi-node-driver- calico-system ac0c5618-fb0f-49f6-8265-d8bd0ced516e 1580 0 2026-01-16 23:59:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-ch2ph eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie5196a145ab [] [] }} ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.133 [INFO][2789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.162 [INFO][2801] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" HandleID="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.162 [INFO][2801] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" HandleID="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-ch2ph", "timestamp":"2026-01-16 23:59:59.162247677 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.162 [INFO][2801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.162 [INFO][2801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.162 [INFO][2801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4'
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.180 [INFO][2801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.187 [INFO][2801] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.195 [INFO][2801] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.199 [INFO][2801] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.202 [INFO][2801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.202 [INFO][2801] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.206 [INFO][2801] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.212 [INFO][2801] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.222 [INFO][2801] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.222 [INFO][2801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" host="10.0.0.4"
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.222 [INFO][2801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 23:59:59.251911 containerd[1472]: 2026-01-16 23:59:59.222 [INFO][2801] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" HandleID="k8s-pod-network.e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" Jan 16 23:59:59.255763 containerd[1472]: 2026-01-16 23:59:59.225 [INFO][2789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--ch2ph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac0c5618-fb0f-49f6-8265-d8bd0ced516e", ResourceVersion:"1580", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-ch2ph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5196a145ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:59:59.255763 containerd[1472]: 2026-01-16 23:59:59.226 [INFO][2789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.194/32] ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" Jan 16 23:59:59.255763 containerd[1472]: 2026-01-16 23:59:59.226 [INFO][2789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5196a145ab ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" Jan 16 23:59:59.255763 containerd[1472]: 2026-01-16 23:59:59.235 [INFO][2789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" Jan 16 23:59:59.255763 containerd[1472]: 2026-01-16 23:59:59.237 [INFO][2789] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--ch2ph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac0c5618-fb0f-49f6-8265-d8bd0ced516e", ResourceVersion:"1580", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 59, 37, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4", Pod:"csi-node-driver-ch2ph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5196a145ab", MAC:"56:01:a2:8d:8a:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 23:59:59.255763 containerd[1472]: 2026-01-16 23:59:59.249 [INFO][2789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4" Namespace="calico-system" Pod="csi-node-driver-ch2ph" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0" Jan 16 23:59:59.273698 containerd[1472]: time="2026-01-16T23:59:59.273456385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:59:59.273698 containerd[1472]: time="2026-01-16T23:59:59.273511227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:59:59.273698 containerd[1472]: time="2026-01-16T23:59:59.273528867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:59.273698 containerd[1472]: time="2026-01-16T23:59:59.273611671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:59.301240 systemd[1]: Started cri-containerd-e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4.scope - libcontainer container e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4. Jan 16 23:59:59.334340 containerd[1472]: time="2026-01-16T23:59:59.334301661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ch2ph,Uid:ac0c5618-fb0f-49f6-8265-d8bd0ced516e,Namespace:calico-system,Attempt:1,} returns sandbox id \"e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4\"" Jan 16 23:59:59.338872 containerd[1472]: time="2026-01-16T23:59:59.338621549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 23:59:59.686556 containerd[1472]: time="2026-01-16T23:59:59.686312497Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 16 23:59:59.688128 containerd[1472]: time="2026-01-16T23:59:59.688001842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 23:59:59.688128 containerd[1472]: time="2026-01-16T23:59:59.688080725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 16 23:59:59.688958 kubelet[1833]: E0116 23:59:59.688402 1833 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:59.688958 kubelet[1833]: E0116 23:59:59.688462 1833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 23:59:59.689103 kubelet[1833]: E0116 23:59:59.688669 1833 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/
termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 23:59:59.691215 containerd[1472]: time="2026-01-16T23:59:59.690921395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 23:59:59.850586 kubelet[1833]: E0116 23:59:59.850543 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:00.026008 containerd[1472]: time="2026-01-17T00:00:00.025445963Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:00.027939 containerd[1472]: time="2026-01-17T00:00:00.027016822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:00:00.027939 containerd[1472]: time="2026-01-17T00:00:00.027219070Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:00:00.028110 kubelet[1833]: E0117 00:00:00.027454 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:00:00.028110 kubelet[1833]: E0117 00:00:00.027513 1833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:00:00.028110 kubelet[1833]: E0117 00:00:00.027687 1833 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:00.029300 kubelet[1833]: E0117 00:00:00.029132 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 17 00:00:00.851691 kubelet[1833]: E0117 00:00:00.851553 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:01.023306 kubelet[1833]: E0117 00:00:01.023192 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 17 00:00:01.258846 systemd-networkd[1382]: calie5196a145ab: Gained IPv6LL Jan 17 00:00:01.852262 kubelet[1833]: E0117 00:00:01.852178 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:02.853146 kubelet[1833]: E0117 00:00:02.853074 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:03.853745 kubelet[1833]: E0117 00:00:03.853684 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:03.914698 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jan 17 00:00:03.924012 systemd[1]: Created slice kubepods-besteffort-pod00dfbe37_ee26_4168_8557_c81368a8453f.slice - libcontainer container kubepods-besteffort-pod00dfbe37_ee26_4168_8557_c81368a8453f.slice. Jan 17 00:00:03.924407 systemd[1]: logrotate.service: Deactivated successfully. 
Jan 17 00:00:03.936378 kubelet[1833]: I0117 00:00:03.936254 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbs4w\" (UniqueName: \"kubernetes.io/projected/00dfbe37-ee26-4168-8557-c81368a8453f-kube-api-access-sbs4w\") pod \"nfs-server-provisioner-0\" (UID: \"00dfbe37-ee26-4168-8557-c81368a8453f\") " pod="default/nfs-server-provisioner-0" Jan 17 00:00:03.936378 kubelet[1833]: I0117 00:00:03.936315 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/00dfbe37-ee26-4168-8557-c81368a8453f-data\") pod \"nfs-server-provisioner-0\" (UID: \"00dfbe37-ee26-4168-8557-c81368a8453f\") " pod="default/nfs-server-provisioner-0" Jan 17 00:00:04.231260 containerd[1472]: time="2026-01-17T00:00:04.231206177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:00dfbe37-ee26-4168-8557-c81368a8453f,Namespace:default,Attempt:0,}" Jan 17 00:00:04.432221 systemd-networkd[1382]: cali60e51b789ff: Link UP Jan 17 00:00:04.433729 systemd-networkd[1382]: cali60e51b789ff: Gained carrier Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.294 [INFO][2872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 00dfbe37-ee26-4168-8557-c81368a8453f 1630 0 2026-01-17 00:00:03 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default 
ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.294 [INFO][2872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.336 [INFO][2887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" HandleID="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.336 [INFO][2887] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" HandleID="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-17 00:00:04.336085497 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.336 [INFO][2887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.336 [INFO][2887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.336 [INFO][2887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.377 [INFO][2887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.385 [INFO][2887] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.397 [INFO][2887] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.400 [INFO][2887] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.404 [INFO][2887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.404 [INFO][2887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.407 [INFO][2887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.413 [INFO][2887] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.99.192/26 handle="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.424 [INFO][2887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.424 [INFO][2887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" host="10.0.0.4" Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.424 [INFO][2887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:00:04.454702 containerd[1472]: 2026-01-17 00:00:04.424 [INFO][2887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" HandleID="k8s-pod-network.dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:00:04.455472 containerd[1472]: 2026-01-17 00:00:04.427 [INFO][2872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"00dfbe37-ee26-4168-8557-c81368a8453f", ResourceVersion:"1630", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:00:04.455472 containerd[1472]: 2026-01-17 00:00:04.427 [INFO][2872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.195/32] ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:00:04.455472 containerd[1472]: 2026-01-17 00:00:04.428 [INFO][2872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:00:04.455472 containerd[1472]: 2026-01-17 00:00:04.434 [INFO][2872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:00:04.455653 containerd[1472]: 2026-01-17 00:00:04.434 [INFO][2872] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"00dfbe37-ee26-4168-8557-c81368a8453f", ResourceVersion:"1630", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"8a:fc:82:5b:3d:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:00:04.455653 containerd[1472]: 2026-01-17 00:00:04.449 [INFO][2872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:00:04.476467 containerd[1472]: time="2026-01-17T00:00:04.476371752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:04.476467 containerd[1472]: time="2026-01-17T00:00:04.476439195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:04.476467 containerd[1472]: time="2026-01-17T00:00:04.476457915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:04.477147 containerd[1472]: time="2026-01-17T00:00:04.476655722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:04.507267 systemd[1]: Started cri-containerd-dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb.scope - libcontainer container dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb. 
Jan 17 00:00:04.546550 containerd[1472]: time="2026-01-17T00:00:04.546464678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:00dfbe37-ee26-4168-8557-c81368a8453f,Namespace:default,Attempt:0,} returns sandbox id \"dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb\"" Jan 17 00:00:04.549235 containerd[1472]: time="2026-01-17T00:00:04.548985402Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 00:00:04.854793 kubelet[1833]: E0117 00:00:04.854631 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:05.802906 systemd-networkd[1382]: cali60e51b789ff: Gained IPv6LL Jan 17 00:00:05.856079 kubelet[1833]: E0117 00:00:05.855396 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:06.357064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601997603.mount: Deactivated successfully. 
Jan 17 00:00:06.855778 kubelet[1833]: E0117 00:00:06.855721 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:07.855898 kubelet[1833]: E0117 00:00:07.855813 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:07.870875 containerd[1472]: time="2026-01-17T00:00:07.869428560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:07.870875 containerd[1472]: time="2026-01-17T00:00:07.870836323Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Jan 17 00:00:07.871486 containerd[1472]: time="2026-01-17T00:00:07.871448062Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:07.874652 containerd[1472]: time="2026-01-17T00:00:07.874602398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:07.876084 containerd[1472]: time="2026-01-17T00:00:07.876039242Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.326903676s" Jan 17 00:00:07.876084 containerd[1472]: time="2026-01-17T00:00:07.876083924Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 17 00:00:07.879191 containerd[1472]: time="2026-01-17T00:00:07.879158858Z" level=info msg="CreateContainer within sandbox \"dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 00:00:07.894848 containerd[1472]: time="2026-01-17T00:00:07.894792456Z" level=info msg="CreateContainer within sandbox \"dc5823efca60a35f5e64beba220c4078207e8410b3bfcd2ca80726c6e2b72ecb\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"048e707a3cc1e33ec6da43797a8915fe6e13d4201e65f041c9b7ab7506c8e151\"" Jan 17 00:00:07.895717 containerd[1472]: time="2026-01-17T00:00:07.895691244Z" level=info msg="StartContainer for \"048e707a3cc1e33ec6da43797a8915fe6e13d4201e65f041c9b7ab7506c8e151\"" Jan 17 00:00:07.923561 systemd[1]: run-containerd-runc-k8s.io-048e707a3cc1e33ec6da43797a8915fe6e13d4201e65f041c9b7ab7506c8e151-runc.FhLyUL.mount: Deactivated successfully. Jan 17 00:00:07.933143 systemd[1]: Started cri-containerd-048e707a3cc1e33ec6da43797a8915fe6e13d4201e65f041c9b7ab7506c8e151.scope - libcontainer container 048e707a3cc1e33ec6da43797a8915fe6e13d4201e65f041c9b7ab7506c8e151. 
Jan 17 00:00:07.960658 containerd[1472]: time="2026-01-17T00:00:07.960201379Z" level=info msg="StartContainer for \"048e707a3cc1e33ec6da43797a8915fe6e13d4201e65f041c9b7ab7506c8e151\" returns successfully" Jan 17 00:00:08.754369 kubelet[1833]: I0117 00:00:08.754094 1833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:00:08.856646 kubelet[1833]: E0117 00:00:08.856586 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:08.866269 kubelet[1833]: I0117 00:00:08.866183 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.536815355 podStartE2EDuration="5.866159067s" podCreationTimestamp="2026-01-17 00:00:03 +0000 UTC" firstStartedPulling="2026-01-17 00:00:04.5483355 +0000 UTC m=+28.820443719" lastFinishedPulling="2026-01-17 00:00:07.877679212 +0000 UTC m=+32.149787431" observedRunningTime="2026-01-17 00:00:08.058069173 +0000 UTC m=+32.330177432" watchObservedRunningTime="2026-01-17 00:00:08.866159067 +0000 UTC m=+33.138267326" Jan 17 00:00:09.857178 kubelet[1833]: E0117 00:00:09.857070 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:10.858089 kubelet[1833]: E0117 00:00:10.858015 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:11.858304 kubelet[1833]: E0117 00:00:11.858221 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:12.860034 kubelet[1833]: E0117 00:00:12.859962 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:12.930255 containerd[1472]: time="2026-01-17T00:00:12.930178383Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:00:13.860613 kubelet[1833]: E0117 00:00:13.860538 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:14.861037 kubelet[1833]: E0117 00:00:14.860953 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:15.862082 kubelet[1833]: E0117 00:00:15.862007 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:16.828880 kubelet[1833]: E0117 00:00:16.828794 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:16.862709 kubelet[1833]: E0117 00:00:16.862645 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:17.300723 containerd[1472]: time="2026-01-17T00:00:17.300608251Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:00:17.302114 containerd[1472]: time="2026-01-17T00:00:17.302060287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:00:17.302259 containerd[1472]: time="2026-01-17T00:00:17.302198491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:00:17.302459 kubelet[1833]: E0117 00:00:17.302367 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:00:17.302459 kubelet[1833]: E0117 00:00:17.302446 1833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:00:17.302653 kubelet[1833]: E0117 00:00:17.302603 1833 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowP
rivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:00:17.305365 containerd[1472]: time="2026-01-17T00:00:17.305322529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:00:17.683117 systemd[1]: Created slice kubepods-besteffort-podd6233d5e_d986_4be3_a15a_fcaa02570168.slice - libcontainer container kubepods-besteffort-podd6233d5e_d986_4be3_a15a_fcaa02570168.slice. 
Jan 17 00:00:17.830789 kubelet[1833]: I0117 00:00:17.830300 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26ns\" (UniqueName: \"kubernetes.io/projected/d6233d5e-d986-4be3-a15a-fcaa02570168-kube-api-access-m26ns\") pod \"test-pod-1\" (UID: \"d6233d5e-d986-4be3-a15a-fcaa02570168\") " pod="default/test-pod-1" Jan 17 00:00:17.830789 kubelet[1833]: I0117 00:00:17.830390 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2bff43c-70fa-44a0-9ddf-d67466338610\" (UniqueName: \"kubernetes.io/nfs/d6233d5e-d986-4be3-a15a-fcaa02570168-pvc-e2bff43c-70fa-44a0-9ddf-d67466338610\") pod \"test-pod-1\" (UID: \"d6233d5e-d986-4be3-a15a-fcaa02570168\") " pod="default/test-pod-1" Jan 17 00:00:17.863556 kubelet[1833]: E0117 00:00:17.863475 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:17.956878 kernel: FS-Cache: Loaded Jan 17 00:00:17.982232 kernel: RPC: Registered named UNIX socket transport module. Jan 17 00:00:17.982351 kernel: RPC: Registered udp transport module. Jan 17 00:00:17.982372 kernel: RPC: Registered tcp transport module. Jan 17 00:00:17.983214 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 00:00:17.983266 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 17 00:00:18.165284 kernel: NFS: Registering the id_resolver key type Jan 17 00:00:18.165607 kernel: Key type id_resolver registered Jan 17 00:00:18.165707 kernel: Key type id_legacy registered Jan 17 00:00:18.189192 nfsidmap[3112]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 00:00:18.193192 nfsidmap[3113]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 00:00:18.288150 containerd[1472]: time="2026-01-17T00:00:18.287249031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d6233d5e-d986-4be3-a15a-fcaa02570168,Namespace:default,Attempt:0,}" Jan 17 00:00:18.588382 systemd-networkd[1382]: cali5ec59c6bf6e: Link UP Jan 17 00:00:18.589985 systemd-networkd[1382]: cali5ec59c6bf6e: Gained carrier Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.372 [INFO][3118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default d6233d5e-d986-4be3-a15a-fcaa02570168 1708 0 2026-01-17 00:00:05 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.372 [INFO][3118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.413 [INFO][3126] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" HandleID="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Workload="10.0.0.4-k8s-test--pod--1-eth0" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.413 [INFO][3126] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" HandleID="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000254fe0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2026-01-17 00:00:18.413590448 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.413 [INFO][3126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.413 [INFO][3126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.413 [INFO][3126] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.452 [INFO][3126] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.462 [INFO][3126] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.476 [INFO][3126] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.498 [INFO][3126] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.504 [INFO][3126] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.504 [INFO][3126] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.509 [INFO][3126] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7 Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.536 [INFO][3126] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.578 [INFO][3126] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 
handle="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.578 [INFO][3126] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" host="10.0.0.4" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.578 [INFO][3126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.578 [INFO][3126] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" HandleID="k8s-pod-network.5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Workload="10.0.0.4-k8s-test--pod--1-eth0" Jan 17 00:00:18.618134 containerd[1472]: 2026-01-17 00:00:18.581 [INFO][3118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d6233d5e-d986-4be3-a15a-fcaa02570168", ResourceVersion:"1708", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:00:18.619128 containerd[1472]: 2026-01-17 00:00:18.581 [INFO][3118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.196/32] ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 17 00:00:18.619128 containerd[1472]: 2026-01-17 00:00:18.581 [INFO][3118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 17 00:00:18.619128 containerd[1472]: 2026-01-17 00:00:18.588 [INFO][3118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 17 00:00:18.619128 containerd[1472]: 2026-01-17 00:00:18.590 [INFO][3118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d6233d5e-d986-4be3-a15a-fcaa02570168", ResourceVersion:"1708", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e2:b8:38:1b:99:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:00:18.619128 containerd[1472]: 2026-01-17 00:00:18.613 [INFO][3118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 17 00:00:18.644778 containerd[1472]: time="2026-01-17T00:00:18.644535349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:18.644778 containerd[1472]: time="2026-01-17T00:00:18.644599030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:18.644778 containerd[1472]: time="2026-01-17T00:00:18.644610230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:18.644778 containerd[1472]: time="2026-01-17T00:00:18.644702753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:18.664289 systemd[1]: Started cri-containerd-5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7.scope - libcontainer container 5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7. Jan 17 00:00:18.704929 containerd[1472]: time="2026-01-17T00:00:18.704847267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d6233d5e-d986-4be3-a15a-fcaa02570168,Namespace:default,Attempt:0,} returns sandbox id \"5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7\"" Jan 17 00:00:18.863941 kubelet[1833]: E0117 00:00:18.863696 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:19.863934 kubelet[1833]: E0117 00:00:19.863878 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:20.394126 systemd-networkd[1382]: cali5ec59c6bf6e: Gained IPv6LL Jan 17 00:00:20.864773 kubelet[1833]: E0117 00:00:20.864567 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:21.865213 kubelet[1833]: E0117 00:00:21.865113 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:22.866347 kubelet[1833]: E0117 00:00:22.866204 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:23.866955 kubelet[1833]: E0117 00:00:23.866896 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:00:23.983268 containerd[1472]: 
time="2026-01-17T00:00:23.983172745Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:23.984774 containerd[1472]: time="2026-01-17T00:00:23.984714620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:00:23.985041 containerd[1472]: time="2026-01-17T00:00:23.984828462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:00:23.985121 kubelet[1833]: E0117 00:00:23.985004 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:00:23.985121 kubelet[1833]: E0117 00:00:23.985054 1833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:00:23.985319 kubelet[1833]: E0117 00:00:23.985265 1833 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:23.986576 containerd[1472]: time="2026-01-17T00:00:23.986164613Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 00:00:23.986803 kubelet[1833]: E0117 00:00:23.986672 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e"
Jan 17 00:00:24.867505 kubelet[1833]: E0117 00:00:24.867326 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:25.867996 kubelet[1833]: E0117 00:00:25.867920 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:26.868601 kubelet[1833]: E0117 00:00:26.868533 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:27.869502 kubelet[1833]: E0117 00:00:27.869407 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:28.870416 kubelet[1833]: E0117 00:00:28.870343 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:29.870927 kubelet[1833]: E0117 00:00:29.870835 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:30.871136 kubelet[1833]: E0117 00:00:30.871061 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:31.026154 containerd[1472]: time="2026-01-17T00:00:31.025167148Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:31.026154 containerd[1472]: time="2026-01-17T00:00:31.025866522Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 00:00:31.031339 containerd[1472]: time="2026-01-17T00:00:31.031291517Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"62401271\" in 7.045072463s"
Jan 17 00:00:31.031339 containerd[1472]: time="2026-01-17T00:00:31.031336998Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\""
Jan 17 00:00:31.033889 containerd[1472]: time="2026-01-17T00:00:31.033789530Z" level=info msg="CreateContainer within sandbox \"5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 00:00:31.052325 containerd[1472]: time="2026-01-17T00:00:31.052200239Z" level=info msg="CreateContainer within sandbox \"5ed2aeb1acef116ae37f1d6eb8624a537fdfa77ffb49dec8445e9bf637f59dc7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"04d6f9c5491ff0199c55299007e5104d1dc131672a257bba5937312626a01c2e\""
Jan 17 00:00:31.053206 containerd[1472]: time="2026-01-17T00:00:31.053162820Z" level=info msg="StartContainer for \"04d6f9c5491ff0199c55299007e5104d1dc131672a257bba5937312626a01c2e\""
Jan 17 00:00:31.086092 systemd[1]: Started cri-containerd-04d6f9c5491ff0199c55299007e5104d1dc131672a257bba5937312626a01c2e.scope - libcontainer container 04d6f9c5491ff0199c55299007e5104d1dc131672a257bba5937312626a01c2e.
Jan 17 00:00:31.114490 containerd[1472]: time="2026-01-17T00:00:31.113706821Z" level=info msg="StartContainer for \"04d6f9c5491ff0199c55299007e5104d1dc131672a257bba5937312626a01c2e\" returns successfully"
Jan 17 00:00:31.871933 kubelet[1833]: E0117 00:00:31.871829 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:32.872698 kubelet[1833]: E0117 00:00:32.872600 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:33.872901 kubelet[1833]: E0117 00:00:33.872770 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:34.873561 kubelet[1833]: E0117 00:00:34.873481 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:35.873699 kubelet[1833]: E0117 00:00:35.873613 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:35.926937 kubelet[1833]: E0117 00:00:35.926781 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e"
Jan 17 00:00:35.945617 kubelet[1833]: I0117 00:00:35.945518 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.619574739 podStartE2EDuration="30.945494214s" podCreationTimestamp="2026-01-17 00:00:05 +0000 UTC" firstStartedPulling="2026-01-17 00:00:18.70621558 +0000 UTC m=+42.978323839" lastFinishedPulling="2026-01-17 00:00:31.032135095 +0000 UTC m=+55.304243314" observedRunningTime="2026-01-17 00:00:32.140337699 +0000 UTC m=+56.412445918" watchObservedRunningTime="2026-01-17 00:00:35.945494214 +0000 UTC m=+60.217602433"
Jan 17 00:00:36.829039 kubelet[1833]: E0117 00:00:36.828968 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:36.849998 containerd[1472]: time="2026-01-17T00:00:36.849726740Z" level=info msg="StopPodSandbox for \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\""
Jan 17 00:00:36.874521 kubelet[1833]: E0117 00:00:36.874460 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.895 [WARNING][3253] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--ch2ph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac0c5618-fb0f-49f6-8265-d8bd0ced516e", ResourceVersion:"1768", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4", Pod:"csi-node-driver-ch2ph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5196a145ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.895 [INFO][3253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.895 [INFO][3253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" iface="eth0" netns=""
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.896 [INFO][3253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.896 [INFO][3253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.916 [INFO][3260] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.916 [INFO][3260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.917 [INFO][3260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.933 [WARNING][3260] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.933 [INFO][3260] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.936 [INFO][3260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:00:36.941868 containerd[1472]: 2026-01-17 00:00:36.939 [INFO][3253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:36.942604 containerd[1472]: time="2026-01-17T00:00:36.941925178Z" level=info msg="TearDown network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\" successfully"
Jan 17 00:00:36.942604 containerd[1472]: time="2026-01-17T00:00:36.941953819Z" level=info msg="StopPodSandbox for \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\" returns successfully"
Jan 17 00:00:36.942604 containerd[1472]: time="2026-01-17T00:00:36.942516330Z" level=info msg="RemovePodSandbox for \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\""
Jan 17 00:00:36.942604 containerd[1472]: time="2026-01-17T00:00:36.942548571Z" level=info msg="Forcibly stopping sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\""
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:36.987 [WARNING][3276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--ch2ph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac0c5618-fb0f-49f6-8265-d8bd0ced516e", ResourceVersion:"1768", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 23, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"e1183ee03e8c4c769590be4f3ec5b9b312a53ce22a2f25fb18dbb8f8462345b4", Pod:"csi-node-driver-ch2ph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5196a145ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:36.987 [INFO][3276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:36.987 [INFO][3276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" iface="eth0" netns=""
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:36.987 [INFO][3276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:36.987 [INFO][3276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:37.009 [INFO][3283] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:37.009 [INFO][3283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:37.009 [INFO][3283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:37.035 [WARNING][3283] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:37.036 [INFO][3283] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" HandleID="k8s-pod-network.7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783" Workload="10.0.0.4-k8s-csi--node--driver--ch2ph-eth0"
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:37.039 [INFO][3283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:00:37.045973 containerd[1472]: 2026-01-17 00:00:37.043 [INFO][3276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783"
Jan 17 00:00:37.045973 containerd[1472]: time="2026-01-17T00:00:37.045338176Z" level=info msg="TearDown network for sandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\" successfully"
Jan 17 00:00:37.050435 containerd[1472]: time="2026-01-17T00:00:37.050112588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:00:37.050435 containerd[1472]: time="2026-01-17T00:00:37.050268276Z" level=info msg="RemovePodSandbox \"7f982bf22cc02fc3b2c5525f7ceebe4ddd58fa81e027072fdd153524bf5a9783\" returns successfully"
Jan 17 00:00:37.875616 kubelet[1833]: E0117 00:00:37.875532 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:38.856435 systemd[1]: run-containerd-runc-k8s.io-77ad8fbf99069a580ce7aa413e16103cb319c5bd64aa37872d7275e0b81c1ccd-runc.9jFrCI.mount: Deactivated successfully.
Jan 17 00:00:38.876609 kubelet[1833]: E0117 00:00:38.876501 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:39.877395 kubelet[1833]: E0117 00:00:39.877314 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:40.878249 kubelet[1833]: E0117 00:00:40.878169 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:41.878912 kubelet[1833]: E0117 00:00:41.878805 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:42.879583 kubelet[1833]: E0117 00:00:42.879501 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:43.880115 kubelet[1833]: E0117 00:00:43.880028 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:44.880559 kubelet[1833]: E0117 00:00:44.880504 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:45.880740 kubelet[1833]: E0117 00:00:45.880667 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:46.881358 kubelet[1833]: E0117 00:00:46.881270 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:47.881807 kubelet[1833]: E0117 00:00:47.881737 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:48.882743 kubelet[1833]: E0117 00:00:48.882637 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:49.882900 kubelet[1833]: E0117 00:00:49.882810 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:49.926659 containerd[1472]: time="2026-01-17T00:00:49.926593980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 17 00:00:50.713317 containerd[1472]: time="2026-01-17T00:00:50.713100239Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:50.714792 containerd[1472]: time="2026-01-17T00:00:50.714661466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 17 00:00:50.715001 containerd[1472]: time="2026-01-17T00:00:50.714788311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 00:00:50.715047 kubelet[1833]: E0117 00:00:50.714990 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:00:50.715133 kubelet[1833]: E0117 00:00:50.715049 1833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:00:50.715334 kubelet[1833]: E0117 00:00:50.715278 1833 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:50.717573 containerd[1472]: time="2026-01-17T00:00:50.717499707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:00:50.883659 kubelet[1833]: E0117 00:00:50.883579 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:51.884408 kubelet[1833]: E0117 00:00:51.884331 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:52.884577 kubelet[1833]: E0117 00:00:52.884468 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:53.746386 kubelet[1833]: E0117 00:00:53.746261 1833 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48842->10.0.0.2:2379: read: connection timed out"
Jan 17 00:00:53.885575 kubelet[1833]: E0117 00:00:53.885496 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:54.681160 containerd[1472]: time="2026-01-17T00:00:54.681049272Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:00:54.682787 containerd[1472]: time="2026-01-17T00:00:54.682657816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:00:54.682970 containerd[1472]: time="2026-01-17T00:00:54.682877145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:00:54.683256 kubelet[1833]: E0117 00:00:54.683174 1833 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:00:54.683256 kubelet[1833]: E0117 00:00:54.683248 1833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:00:54.683575 kubelet[1833]: E0117 00:00:54.683361 1833 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st78v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ch2ph_calico-system(ac0c5618-fb0f-49f6-8265-d8bd0ced516e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:00:54.684729 kubelet[1833]: E0117 00:00:54.684610 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e"
Jan 17 00:00:54.886467 kubelet[1833]: E0117 00:00:54.886408 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:55.027708 kubelet[1833]: E0117 00:00:55.026462 1833 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48686->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{csi-node-driver-ch2ph.188b5b9adb6c24a7 calico-system 1680 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-ch2ph,UID:ac0c5618-fb0f-49f6-8265-d8bd0ced516e,APIVersion:v1,ResourceVersion:1418,FieldPath:spec.containers{calico-csi},},Reason:Pulling,Message:Pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2026-01-16 23:59:59 +0000 UTC,LastTimestamp:2026-01-17 00:00:49.925528174 +0000 UTC m=+74.197636433,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}"
Jan 17 00:00:55.886785 kubelet[1833]: E0117 00:00:55.886720 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:56.829293 kubelet[1833]: E0117 00:00:56.829167 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:56.887474 kubelet[1833]: E0117 00:00:56.887405 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:57.887689 kubelet[1833]: E0117 00:00:57.887594 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:58.888449 kubelet[1833]: E0117 00:00:58.888365 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:00:59.889718 kubelet[1833]: E0117 00:00:59.889567 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:00.890656 kubelet[1833]: E0117 00:01:00.890583 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:01.891774 kubelet[1833]: E0117 00:01:01.891692 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:02.892677 kubelet[1833]: E0117 00:01:02.892601 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:03.746836 kubelet[1833]: E0117 00:01:03.746737 1833 controller.go:195] "Failed to update lease" err="Put \"https://46.224.42.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:01:03.892919 kubelet[1833]: E0117 00:01:03.892834 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:04.894008 kubelet[1833]: E0117 00:01:04.893932 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:05.895093 kubelet[1833]: E0117 00:01:05.895012 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:06.896059 kubelet[1833]: E0117 00:01:06.895997 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:07.896293 kubelet[1833]: E0117 00:01:07.896204 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:08.896635 kubelet[1833]: E0117 00:01:08.896574 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:09.897523 kubelet[1833]: E0117 00:01:09.897421 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:01:09.926665 kubelet[1833]: E0117 00:01:09.926449 1833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ch2ph" podUID="ac0c5618-fb0f-49f6-8265-d8bd0ced516e" Jan 17 00:01:10.898555 kubelet[1833]: E0117 00:01:10.898402 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:01:11.899665 kubelet[1833]: E0117 00:01:11.899585 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:01:12.900481 kubelet[1833]: E0117 00:01:12.900394 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:01:13.748003 kubelet[1833]: E0117 00:01:13.747906 1833 controller.go:195] "Failed to update lease" err="Put \"https://46.224.42.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 00:01:13.900825 kubelet[1833]: E0117 00:01:13.900727 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:01:14.901802 kubelet[1833]: E0117 00:01:14.901730 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"