Apr 28 00:14:29.895988 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 28 00:14:29.896017 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 27 22:49:05 -00 2026
Apr 28 00:14:29.896029 kernel: KASLR enabled
Apr 28 00:14:29.896035 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 28 00:14:29.896040 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 28 00:14:29.896046 kernel: random: crng init done
Apr 28 00:14:29.896053 kernel: ACPI: Early table checksum verification disabled
Apr 28 00:14:29.896059 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 28 00:14:29.896066 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 28 00:14:29.896073 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896080 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896086 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896092 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896098 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896106 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896114 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896120 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896127 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:14:29.896133 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 28 00:14:29.896140 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 28 00:14:29.896146 kernel: NUMA: Failed to initialise from firmware
Apr 28 00:14:29.896153 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 28 00:14:29.896159 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Apr 28 00:14:29.896165 kernel: Zone ranges:
Apr 28 00:14:29.896172 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 28 00:14:29.896180 kernel: DMA32 empty
Apr 28 00:14:29.896186 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 28 00:14:29.896193 kernel: Movable zone start for each node
Apr 28 00:14:29.896199 kernel: Early memory node ranges
Apr 28 00:14:29.896205 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 28 00:14:29.896212 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 28 00:14:29.896218 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 28 00:14:29.896225 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 28 00:14:29.896231 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 28 00:14:29.896237 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 28 00:14:29.896244 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 28 00:14:29.896262 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 28 00:14:29.896273 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 28 00:14:29.896280 kernel: psci: probing for conduit method from ACPI.
Apr 28 00:14:29.896286 kernel: psci: PSCIv1.1 detected in firmware.
Apr 28 00:14:29.896296 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 28 00:14:29.896302 kernel: psci: Trusted OS migration not required
Apr 28 00:14:29.896309 kernel: psci: SMC Calling Convention v1.1
Apr 28 00:14:29.896318 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 28 00:14:29.896325 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 28 00:14:29.896332 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 28 00:14:29.896338 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 28 00:14:29.896345 kernel: Detected PIPT I-cache on CPU0
Apr 28 00:14:29.896352 kernel: CPU features: detected: GIC system register CPU interface
Apr 28 00:14:29.896359 kernel: CPU features: detected: Hardware dirty bit management
Apr 28 00:14:29.896366 kernel: CPU features: detected: Spectre-v4
Apr 28 00:14:29.896373 kernel: CPU features: detected: Spectre-BHB
Apr 28 00:14:29.896379 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 28 00:14:29.896388 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 28 00:14:29.896395 kernel: CPU features: detected: ARM erratum 1418040
Apr 28 00:14:29.896402 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 28 00:14:29.896408 kernel: alternatives: applying boot alternatives
Apr 28 00:14:29.896417 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fbd74e24c605bcd6049a4229047ecffba5884416be782935a76f3959939199f
Apr 28 00:14:29.896424 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 00:14:29.896431 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 00:14:29.896438 kernel: Fallback order for Node 0: 0
Apr 28 00:14:29.896445 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 28 00:14:29.896451 kernel: Policy zone: Normal
Apr 28 00:14:29.896458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 00:14:29.896466 kernel: software IO TLB: area num 2.
Apr 28 00:14:29.896473 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 28 00:14:29.896480 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Apr 28 00:14:29.896487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 28 00:14:29.896494 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 00:14:29.896501 kernel: rcu: RCU event tracing is enabled.
Apr 28 00:14:29.896508 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 28 00:14:29.896515 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 00:14:29.896522 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 00:14:29.896529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 00:14:29.896536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 28 00:14:29.896542 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 28 00:14:29.896551 kernel: GICv3: 256 SPIs implemented
Apr 28 00:14:29.896557 kernel: GICv3: 0 Extended SPIs implemented
Apr 28 00:14:29.896564 kernel: Root IRQ handler: gic_handle_irq
Apr 28 00:14:29.896571 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 28 00:14:29.896578 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 28 00:14:29.896585 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 28 00:14:29.896592 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 28 00:14:29.896599 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 28 00:14:29.896606 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 28 00:14:29.896613 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 28 00:14:29.896620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 00:14:29.896629 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 28 00:14:29.896636 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 28 00:14:29.896643 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 28 00:14:29.896650 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 28 00:14:29.896657 kernel: Console: colour dummy device 80x25
Apr 28 00:14:29.896664 kernel: ACPI: Core revision 20230628
Apr 28 00:14:29.896671 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 28 00:14:29.896678 kernel: pid_max: default: 32768 minimum: 301
Apr 28 00:14:29.896685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 00:14:29.896695 kernel: landlock: Up and running.
Apr 28 00:14:29.896704 kernel: SELinux: Initializing.
Apr 28 00:14:29.896712 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:14:29.896720 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:14:29.896728 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 28 00:14:29.898065 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 28 00:14:29.898076 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 00:14:29.898084 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 00:14:29.898092 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 28 00:14:29.898099 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 28 00:14:29.898115 kernel: Remapping and enabling EFI services.
Apr 28 00:14:29.898122 kernel: smp: Bringing up secondary CPUs ...
Apr 28 00:14:29.898129 kernel: Detected PIPT I-cache on CPU1
Apr 28 00:14:29.898137 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 28 00:14:29.898144 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 28 00:14:29.898152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 28 00:14:29.898160 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 28 00:14:29.898167 kernel: smp: Brought up 1 node, 2 CPUs
Apr 28 00:14:29.898174 kernel: SMP: Total of 2 processors activated.
Apr 28 00:14:29.898181 kernel: CPU features: detected: 32-bit EL0 Support
Apr 28 00:14:29.898191 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 28 00:14:29.898198 kernel: CPU features: detected: Common not Private translations
Apr 28 00:14:29.898211 kernel: CPU features: detected: CRC32 instructions
Apr 28 00:14:29.898220 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 28 00:14:29.898227 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 28 00:14:29.898235 kernel: CPU features: detected: LSE atomic instructions
Apr 28 00:14:29.898243 kernel: CPU features: detected: Privileged Access Never
Apr 28 00:14:29.898261 kernel: CPU features: detected: RAS Extension Support
Apr 28 00:14:29.898272 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 28 00:14:29.898280 kernel: CPU: All CPU(s) started at EL1
Apr 28 00:14:29.898288 kernel: alternatives: applying system-wide alternatives
Apr 28 00:14:29.898295 kernel: devtmpfs: initialized
Apr 28 00:14:29.898303 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 00:14:29.898311 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 28 00:14:29.898318 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 00:14:29.898326 kernel: SMBIOS 3.0.0 present.
Apr 28 00:14:29.898336 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 28 00:14:29.898343 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 00:14:29.898351 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 28 00:14:29.898358 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 28 00:14:29.898366 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 28 00:14:29.898373 kernel: audit: initializing netlink subsys (disabled)
Apr 28 00:14:29.898381 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Apr 28 00:14:29.898389 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 00:14:29.898396 kernel: cpuidle: using governor menu
Apr 28 00:14:29.898405 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 28 00:14:29.898413 kernel: ASID allocator initialised with 32768 entries
Apr 28 00:14:29.898420 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 00:14:29.898428 kernel: Serial: AMBA PL011 UART driver
Apr 28 00:14:29.898436 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 28 00:14:29.898443 kernel: Modules: 0 pages in range for non-PLT usage
Apr 28 00:14:29.898451 kernel: Modules: 509008 pages in range for PLT usage
Apr 28 00:14:29.898458 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 00:14:29.898466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 00:14:29.898475 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 28 00:14:29.898483 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 28 00:14:29.898490 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 00:14:29.898498 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 00:14:29.898506 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 28 00:14:29.898513 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 28 00:14:29.898521 kernel: ACPI: Added _OSI(Module Device)
Apr 28 00:14:29.898528 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 00:14:29.898535 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 00:14:29.898543 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 00:14:29.898552 kernel: ACPI: Interpreter enabled
Apr 28 00:14:29.898560 kernel: ACPI: Using GIC for interrupt routing
Apr 28 00:14:29.898568 kernel: ACPI: MCFG table detected, 1 entries
Apr 28 00:14:29.898575 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 28 00:14:29.898583 kernel: printk: console [ttyAMA0] enabled
Apr 28 00:14:29.898590 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 00:14:29.898802 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 00:14:29.898913 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 28 00:14:29.898990 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 28 00:14:29.899056 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 28 00:14:29.899135 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 28 00:14:29.899146 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 28 00:14:29.899155 kernel: PCI host bridge to bus 0000:00
Apr 28 00:14:29.899232 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 28 00:14:29.899310 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 28 00:14:29.899376 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 28 00:14:29.899438 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 00:14:29.899523 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 28 00:14:29.899605 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 28 00:14:29.899672 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 28 00:14:29.899739 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 28 00:14:29.899821 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.900504 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 28 00:14:29.900605 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.900673 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 28 00:14:29.900748 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.900813 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 28 00:14:29.900937 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.901010 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 28 00:14:29.901085 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.901153 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 28 00:14:29.901228 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.901313 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 28 00:14:29.901396 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.901464 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 28 00:14:29.901538 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.901604 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 28 00:14:29.901676 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 28 00:14:29.901741 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 28 00:14:29.901814 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 28 00:14:29.901933 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 28 00:14:29.902021 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 28 00:14:29.902093 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 28 00:14:29.903805 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 28 00:14:29.903943 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 28 00:14:29.904030 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 28 00:14:29.904110 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 28 00:14:29.904197 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 28 00:14:29.904289 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 28 00:14:29.904364 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 28 00:14:29.904444 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 28 00:14:29.904513 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 28 00:14:29.904591 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 28 00:14:29.904666 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 28 00:14:29.904734 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 28 00:14:29.904812 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 28 00:14:29.907056 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 28 00:14:29.907168 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 28 00:14:29.907283 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 28 00:14:29.907357 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 28 00:14:29.907427 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 28 00:14:29.907495 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 28 00:14:29.907567 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 28 00:14:29.907635 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 28 00:14:29.907700 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 28 00:14:29.907777 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 28 00:14:29.907843 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 28 00:14:29.907971 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 28 00:14:29.908045 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 28 00:14:29.908113 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 28 00:14:29.908181 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 28 00:14:29.908264 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 28 00:14:29.908346 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 28 00:14:29.908418 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 28 00:14:29.908490 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 28 00:14:29.908557 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 28 00:14:29.908623 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 28 00:14:29.908695 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 28 00:14:29.908767 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 28 00:14:29.911551 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 28 00:14:29.911712 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 28 00:14:29.911788 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 28 00:14:29.911873 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 28 00:14:29.911948 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 28 00:14:29.912015 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 28 00:14:29.912079 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 28 00:14:29.912148 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 28 00:14:29.912225 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 28 00:14:29.912353 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 28 00:14:29.912428 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 28 00:14:29.912493 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 28 00:14:29.912563 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 28 00:14:29.912630 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 28 00:14:29.912699 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 28 00:14:29.912766 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 28 00:14:29.912877 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 28 00:14:29.916295 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 28 00:14:29.916406 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 28 00:14:29.916478 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 28 00:14:29.916553 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 28 00:14:29.916626 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 28 00:14:29.916709 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 28 00:14:29.916778 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 28 00:14:29.916869 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 28 00:14:29.916945 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 28 00:14:29.917023 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 28 00:14:29.917100 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 28 00:14:29.917176 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 28 00:14:29.917259 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 28 00:14:29.917339 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 28 00:14:29.917411 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 28 00:14:29.917487 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 28 00:14:29.917557 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 28 00:14:29.917630 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 28 00:14:29.917700 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 28 00:14:29.917769 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 28 00:14:29.917842 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 28 00:14:29.918523 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 28 00:14:29.918599 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 28 00:14:29.918669 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 28 00:14:29.918751 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 28 00:14:29.918823 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 28 00:14:29.918905 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 28 00:14:29.918976 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 28 00:14:29.919050 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 28 00:14:29.919120 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 28 00:14:29.919184 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 28 00:14:29.919268 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 28 00:14:29.919351 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 28 00:14:29.919421 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 28 00:14:29.919490 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 28 00:14:29.919558 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 28 00:14:29.919630 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 28 00:14:29.919715 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 28 00:14:29.919845 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 28 00:14:29.920015 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 28 00:14:29.920094 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 28 00:14:29.920162 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 28 00:14:29.920228 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 28 00:14:29.920314 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 28 00:14:29.920399 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 28 00:14:29.920469 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 28 00:14:29.920538 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 28 00:14:29.920605 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 28 00:14:29.920676 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 28 00:14:29.920742 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 28 00:14:29.920817 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 28 00:14:29.920901 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 28 00:14:29.920971 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 28 00:14:29.921039 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 28 00:14:29.921106 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 28 00:14:29.921185 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 28 00:14:29.921271 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 28 00:14:29.921345 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 28 00:14:29.921414 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 28 00:14:29.921482 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 28 00:14:29.921551 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 28 00:14:29.921626 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 28 00:14:29.921697 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 28 00:14:29.921768 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 28 00:14:29.921840 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 28 00:14:29.922003 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 28 00:14:29.922075 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 28 00:14:29.922220 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 28 00:14:29.922357 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 28 00:14:29.922434 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 28 00:14:29.922567 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 28 00:14:29.922650 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 28 00:14:29.922726 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 28 00:14:29.922791 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 28 00:14:29.922936 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 28 00:14:29.923017 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 28 00:14:29.923082 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 28 00:14:29.923148 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 28 00:14:29.923215 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 28 00:14:29.923300 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 28 00:14:29.923376 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 28 00:14:29.923442 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 28 00:14:29.923513 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 28 00:14:29.923576 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 28 00:14:29.923636 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 28 00:14:29.923711 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 28 00:14:29.923773 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 28 00:14:29.923838 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 28 00:14:29.929043 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 28 00:14:29.929133 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 28 00:14:29.929198 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 28 00:14:29.929289 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 28 00:14:29.929354 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 28 00:14:29.929415 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 28 00:14:29.929502 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 28 00:14:29.929563 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 28 00:14:29.929639 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 28 00:14:29.929708 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 28 00:14:29.929769 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 28 00:14:29.929829 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 28 00:14:29.929918 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 28 00:14:29.929980 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 28 00:14:29.930043 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 28 00:14:29.930111 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 28 00:14:29.930175 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 28 00:14:29.930236 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 28 00:14:29.930315 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 28 00:14:29.930493 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 28 00:14:29.930569 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 28 00:14:29.930641 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 28 00:14:29.930703 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 28 00:14:29.930772 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 28 00:14:29.930782 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 28 00:14:29.930791 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 28 00:14:29.930799 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 28 00:14:29.930807 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 28 00:14:29.930815 kernel: iommu: Default domain type: Translated
Apr 28 00:14:29.930823 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 28 00:14:29.930831 kernel: efivars: Registered efivars operations
Apr 28 00:14:29.930838 kernel: vgaarb: loaded
Apr 28 00:14:29.931835 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 28 00:14:29.931882 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 00:14:29.931898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 00:14:29.931907 kernel: pnp: PnP ACPI init
Apr 28 00:14:29.932040 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 28 00:14:29.932053 kernel: pnp: PnP ACPI: found 1 devices
Apr 28 00:14:29.932061 kernel: NET: Registered PF_INET protocol family
Apr 28 00:14:29.932070 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 00:14:29.932085 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 00:14:29.932093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 00:14:29.932101 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 00:14:29.932109
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 28 00:14:29.932119 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 28 00:14:29.932127 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:14:29.932135 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:14:29.932143 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 28 00:14:29.932225 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 28 00:14:29.932239 kernel: PCI: CLS 0 bytes, default 64 Apr 28 00:14:29.932247 kernel: kvm [1]: HYP mode not available Apr 28 00:14:29.932270 kernel: Initialise system trusted keyrings Apr 28 00:14:29.932278 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 28 00:14:29.932286 kernel: Key type asymmetric registered Apr 28 00:14:29.932294 kernel: Asymmetric key parser 'x509' registered Apr 28 00:14:29.932302 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 28 00:14:29.932310 kernel: io scheduler mq-deadline registered Apr 28 00:14:29.932317 kernel: io scheduler kyber registered Apr 28 00:14:29.932328 kernel: io scheduler bfq registered Apr 28 00:14:29.932337 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 28 00:14:29.932422 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 28 00:14:29.932492 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 28 00:14:29.932557 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.932629 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 28 00:14:29.932696 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 28 00:14:29.932764 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.932834 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 28 00:14:29.932920 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 28 00:14:29.932989 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.933062 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 28 00:14:29.933130 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 28 00:14:29.933201 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.933311 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 28 00:14:29.933385 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 28 00:14:29.933452 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.933524 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 28 00:14:29.933602 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 28 00:14:29.933675 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.933748 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 28 00:14:29.933817 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 28 00:14:29.933941 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.934015 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 28 00:14:29.934083 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 28 00:14:29.934154 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.934165 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 28 00:14:29.934236 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 28 00:14:29.934338 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 28 00:14:29.934420 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:14:29.934432 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 28 00:14:29.934442 kernel: ACPI: button: Power Button [PWRB] Apr 28 00:14:29.934455 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 28 00:14:29.934528 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 28 00:14:29.934604 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 28 00:14:29.934615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 00:14:29.934623 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 28 00:14:29.934691 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 28 00:14:29.934703 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 28 00:14:29.934711 kernel: thunder_xcv, ver 1.0 Apr 28 00:14:29.934721 kernel: thunder_bgx, ver 1.0 Apr 28 00:14:29.934729 kernel: nicpf, ver 1.0 Apr 28 00:14:29.934736 kernel: nicvf, ver 1.0 Apr 28 00:14:29.934823 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 28 00:14:29.934980 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-28T00:14:29 UTC (1777335269) Apr 28 00:14:29.934994 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 28 00:14:29.935002 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 28 00:14:29.935010 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 28 00:14:29.935022 kernel: watchdog: Hard watchdog permanently disabled Apr 28 00:14:29.935030 kernel: NET: Registered PF_INET6 protocol family Apr 28 00:14:29.935038 kernel: Segment Routing with IPv6 Apr 28 00:14:29.935046 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 28 00:14:29.935055 kernel: NET: Registered PF_PACKET protocol family Apr 28 00:14:29.935065 kernel: Key type dns_resolver registered Apr 28 00:14:29.935073 kernel: registered taskstats version 1 Apr 28 00:14:29.935081 kernel: Loading compiled-in X.509 certificates Apr 28 00:14:29.935090 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6c96a5ff031ece119b3ff0073294cdad6eea39a2' Apr 28 00:14:29.935099 kernel: Key type .fscrypt registered Apr 28 00:14:29.935107 kernel: Key type fscrypt-provisioning registered Apr 28 00:14:29.935115 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 28 00:14:29.935123 kernel: ima: Allocated hash algorithm: sha1 Apr 28 00:14:29.935132 kernel: ima: No architecture policies found Apr 28 00:14:29.935140 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 28 00:14:29.935148 kernel: clk: Disabling unused clocks Apr 28 00:14:29.935156 kernel: Freeing unused kernel memory: 39424K Apr 28 00:14:29.935170 kernel: Run /init as init process Apr 28 00:14:29.935179 kernel: with arguments: Apr 28 00:14:29.935189 kernel: /init Apr 28 00:14:29.935197 kernel: with environment: Apr 28 00:14:29.935205 kernel: HOME=/ Apr 28 00:14:29.935212 kernel: TERM=linux Apr 28 00:14:29.935225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 00:14:29.935236 systemd[1]: Detected virtualization kvm. Apr 28 00:14:29.935248 systemd[1]: Detected architecture arm64. Apr 28 00:14:29.935271 systemd[1]: Running in initrd. Apr 28 00:14:29.935280 systemd[1]: No hostname configured, using default hostname. Apr 28 00:14:29.935288 systemd[1]: Hostname set to . 
Apr 28 00:14:29.935297 systemd[1]: Initializing machine ID from VM UUID. Apr 28 00:14:29.935305 systemd[1]: Queued start job for default target initrd.target. Apr 28 00:14:29.935316 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:14:29.935324 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:14:29.935333 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 28 00:14:29.935344 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:14:29.935353 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 28 00:14:29.935362 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 28 00:14:29.935372 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 28 00:14:29.935384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 28 00:14:29.935393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:14:29.935402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:14:29.935414 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:14:29.935429 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:14:29.935438 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:14:29.935446 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:14:29.935455 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:14:29.935463 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:14:29.935471 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 28 00:14:29.935480 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 00:14:29.935488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:14:29.935499 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:14:29.935508 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:14:29.935516 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:14:29.935525 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 00:14:29.935534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:14:29.935542 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 28 00:14:29.935550 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 00:14:29.935559 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:14:29.935569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:14:29.935578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:14:29.935613 systemd-journald[237]: Collecting audit messages is disabled. Apr 28 00:14:29.935635 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 00:14:29.935646 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:14:29.935654 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 00:14:29.935666 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:14:29.935674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:14:29.935683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 28 00:14:29.935694 systemd-journald[237]: Journal started Apr 28 00:14:29.935714 systemd-journald[237]: Runtime Journal (/run/log/journal/56be9286ae974819b612fea011a8afdc) is 8.0M, max 76.6M, 68.6M free. Apr 28 00:14:29.937906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:14:29.911698 systemd-modules-load[238]: Inserted module 'overlay' Apr 28 00:14:29.941846 kernel: Bridge firewalling registered Apr 28 00:14:29.941893 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:14:29.940429 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 28 00:14:29.942999 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:14:29.945973 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:14:29.957494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:14:29.962571 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:14:29.967100 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:14:29.968890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:14:29.978016 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:14:29.982944 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 28 00:14:29.986379 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:14:29.996689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:14:30.008196 dracut-cmdline[273]: dracut-dracut-053 Apr 28 00:14:30.009042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 28 00:14:30.014005 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fbd74e24c605bcd6049a4229047ecffba5884416be782935a76f3959939199f Apr 28 00:14:30.035834 systemd-resolved[278]: Positive Trust Anchors: Apr 28 00:14:30.036609 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:14:30.036644 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:14:30.046919 systemd-resolved[278]: Defaulting to hostname 'linux'. Apr 28 00:14:30.048619 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:14:30.050099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:14:30.081909 kernel: SCSI subsystem initialized Apr 28 00:14:30.086883 kernel: Loading iSCSI transport class v2.0-870. Apr 28 00:14:30.094929 kernel: iscsi: registered transport (tcp) Apr 28 00:14:30.108993 kernel: iscsi: registered transport (qla4xxx) Apr 28 00:14:30.109123 kernel: QLogic iSCSI HBA Driver Apr 28 00:14:30.157181 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 00:14:30.168482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 28 00:14:30.189954 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 28 00:14:30.190026 kernel: device-mapper: uevent: version 1.0.3 Apr 28 00:14:30.190038 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 00:14:30.240937 kernel: raid6: neonx8 gen() 15498 MB/s Apr 28 00:14:30.257930 kernel: raid6: neonx4 gen() 15473 MB/s Apr 28 00:14:30.274917 kernel: raid6: neonx2 gen() 13117 MB/s Apr 28 00:14:30.291910 kernel: raid6: neonx1 gen() 10415 MB/s Apr 28 00:14:30.308924 kernel: raid6: int64x8 gen() 6937 MB/s Apr 28 00:14:30.325956 kernel: raid6: int64x4 gen() 7312 MB/s Apr 28 00:14:30.342916 kernel: raid6: int64x2 gen() 6077 MB/s Apr 28 00:14:30.359923 kernel: raid6: int64x1 gen() 5041 MB/s Apr 28 00:14:30.360015 kernel: raid6: using algorithm neonx8 gen() 15498 MB/s Apr 28 00:14:30.376938 kernel: raid6: .... xor() 11939 MB/s, rmw enabled Apr 28 00:14:30.377026 kernel: raid6: using neon recovery algorithm Apr 28 00:14:30.381893 kernel: xor: measuring software checksum speed Apr 28 00:14:30.381951 kernel: 8regs : 19769 MB/sec Apr 28 00:14:30.381971 kernel: 32regs : 17359 MB/sec Apr 28 00:14:30.382941 kernel: arm64_neon : 25433 MB/sec Apr 28 00:14:30.382996 kernel: xor: using function: arm64_neon (25433 MB/sec) Apr 28 00:14:30.435150 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 00:14:30.452946 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:14:30.459113 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:14:30.487104 systemd-udevd[459]: Using default interface naming scheme 'v255'. Apr 28 00:14:30.490775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:14:30.500071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 28 00:14:30.516886 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Apr 28 00:14:30.559011 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:14:30.567087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:14:30.618742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:14:30.628026 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 00:14:30.644185 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 00:14:30.647598 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:14:30.649805 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:14:30.651540 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:14:30.659073 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 00:14:30.681777 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:14:30.742931 kernel: scsi host0: Virtio SCSI HBA Apr 28 00:14:30.745920 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 00:14:30.746029 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 28 00:14:30.755927 kernel: ACPI: bus type USB registered Apr 28 00:14:30.756004 kernel: usbcore: registered new interface driver usbfs Apr 28 00:14:30.763104 kernel: usbcore: registered new interface driver hub Apr 28 00:14:30.771882 kernel: usbcore: registered new device driver usb Apr 28 00:14:30.800119 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 28 00:14:30.799204 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:14:30.799348 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 28 00:14:30.800399 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:14:30.806955 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 28 00:14:30.807139 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 00:14:30.801964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:14:30.802166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:14:30.803193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:14:30.810935 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 28 00:14:30.812504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:14:30.821051 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 28 00:14:30.823172 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 28 00:14:30.823294 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 28 00:14:30.823384 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 28 00:14:30.823471 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 28 00:14:30.830943 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 28 00:14:30.831017 kernel: GPT:17805311 != 80003071 Apr 28 00:14:30.831040 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 00:14:30.831050 kernel: GPT:17805311 != 80003071 Apr 28 00:14:30.831059 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 28 00:14:30.831068 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 28 00:14:30.832880 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 28 00:14:30.844572 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 28 00:14:30.844795 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 28 00:14:30.844904 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 28 00:14:30.847366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:14:30.852093 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 28 00:14:30.852509 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 28 00:14:30.854039 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 28 00:14:30.854284 kernel: hub 1-0:1.0: USB hub found Apr 28 00:14:30.854905 kernel: hub 1-0:1.0: 4 ports detected Apr 28 00:14:30.855870 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 28 00:14:30.856864 kernel: hub 2-0:1.0: USB hub found Apr 28 00:14:30.857034 kernel: hub 2-0:1.0: 4 ports detected Apr 28 00:14:30.858411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:14:30.886909 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:14:30.905557 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (506) Apr 28 00:14:30.905621 kernel: BTRFS: device fsid 4ceb9780-605b-47f7-8c1f-b3fcb9f87ddc devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (514) Apr 28 00:14:30.907266 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 28 00:14:30.914184 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 28 00:14:30.926379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Apr 28 00:14:30.928276 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 28 00:14:30.934944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 28 00:14:30.942109 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 00:14:30.950233 disk-uuid[575]: Primary Header is updated. Apr 28 00:14:30.950233 disk-uuid[575]: Secondary Entries is updated. Apr 28 00:14:30.950233 disk-uuid[575]: Secondary Header is updated. Apr 28 00:14:30.955904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 28 00:14:31.097887 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 28 00:14:31.233895 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 28 00:14:31.234887 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 28 00:14:31.236099 kernel: usbcore: registered new interface driver usbhid Apr 28 00:14:31.236129 kernel: usbhid: USB HID core driver Apr 28 00:14:31.339008 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 28 00:14:31.470001 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 28 00:14:31.523910 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 28 00:14:31.969983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 28 00:14:31.970573 disk-uuid[576]: The operation has completed successfully. Apr 28 00:14:32.028832 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 28 00:14:32.030898 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Apr 28 00:14:32.049155 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 00:14:32.055699 sh[595]: Success Apr 28 00:14:32.071897 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 28 00:14:32.122999 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 00:14:32.132620 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 00:14:32.135907 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 28 00:14:32.151977 kernel: BTRFS info (device dm-0): first mount of filesystem 4ceb9780-605b-47f7-8c1f-b3fcb9f87ddc Apr 28 00:14:32.152048 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 28 00:14:32.152064 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 00:14:32.153340 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 00:14:32.154891 kernel: BTRFS info (device dm-0): using free space tree Apr 28 00:14:32.160909 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 28 00:14:32.163970 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 28 00:14:32.165981 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 00:14:32.172118 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 00:14:32.177118 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 28 00:14:32.187919 kernel: BTRFS info (device sda6): first mount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220 Apr 28 00:14:32.188016 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 28 00:14:32.188043 kernel: BTRFS info (device sda6): using free space tree Apr 28 00:14:32.191969 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 28 00:14:32.192010 kernel: BTRFS info (device sda6): auto enabling async discard Apr 28 00:14:32.203868 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 00:14:32.204917 kernel: BTRFS info (device sda6): last unmount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220 Apr 28 00:14:32.215923 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 00:14:32.225215 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 28 00:14:32.315595 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:14:32.325071 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:14:32.336284 ignition[679]: Ignition 2.19.0 Apr 28 00:14:32.337021 ignition[679]: Stage: fetch-offline Apr 28 00:14:32.337087 ignition[679]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:14:32.337097 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 28 00:14:32.337299 ignition[679]: parsed url from cmdline: "" Apr 28 00:14:32.337302 ignition[679]: no config URL provided Apr 28 00:14:32.337307 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:14:32.337315 ignition[679]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:14:32.343114 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 28 00:14:32.337321 ignition[679]: failed to fetch config: resource requires networking Apr 28 00:14:32.338367 ignition[679]: Ignition finished successfully Apr 28 00:14:32.353477 systemd-networkd[781]: lo: Link UP Apr 28 00:14:32.353496 systemd-networkd[781]: lo: Gained carrier Apr 28 00:14:32.355381 systemd-networkd[781]: Enumeration completed Apr 28 00:14:32.355941 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:14:32.356683 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:14:32.356686 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:14:32.358097 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:14:32.358100 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:14:32.358815 systemd-networkd[781]: eth0: Link UP Apr 28 00:14:32.358819 systemd-networkd[781]: eth0: Gained carrier Apr 28 00:14:32.358829 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:14:32.361097 systemd[1]: Reached target network.target - Network. Apr 28 00:14:32.366209 systemd-networkd[781]: eth1: Link UP Apr 28 00:14:32.366214 systemd-networkd[781]: eth1: Gained carrier Apr 28 00:14:32.366239 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:14:32.371134 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 28 00:14:32.386322 ignition[784]: Ignition 2.19.0 Apr 28 00:14:32.386332 ignition[784]: Stage: fetch Apr 28 00:14:32.386559 ignition[784]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:14:32.386568 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 28 00:14:32.386670 ignition[784]: parsed url from cmdline: "" Apr 28 00:14:32.386673 ignition[784]: no config URL provided Apr 28 00:14:32.386678 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:14:32.386685 ignition[784]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:14:32.386708 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 28 00:14:32.387196 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 28 00:14:32.397938 systemd-networkd[781]: eth0: DHCPv4 address 128.140.91.51/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 28 00:14:32.454963 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 28 00:14:32.587999 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 28 00:14:32.593199 ignition[784]: GET result: OK Apr 28 00:14:32.593390 ignition[784]: parsing config with SHA512: 387af84a9be87bb265c5b4d98a49469f621af02af95a6a15744204fbb993aaf5fc82ce59ec9d1f88aa889d5ced9db321080155e478c81a247672221049e2f633 Apr 28 00:14:32.598980 unknown[784]: fetched base config from "system" Apr 28 00:14:32.598991 unknown[784]: fetched base config from "system" Apr 28 00:14:32.599460 ignition[784]: fetch: fetch complete Apr 28 00:14:32.598999 unknown[784]: fetched user config from "hetzner" Apr 28 00:14:32.599465 ignition[784]: fetch: fetch passed Apr 28 00:14:32.602592 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 28 00:14:32.599513 ignition[784]: Ignition finished successfully Apr 28 00:14:32.608187 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 28 00:14:32.625582 ignition[792]: Ignition 2.19.0 Apr 28 00:14:32.625604 ignition[792]: Stage: kargs Apr 28 00:14:32.626097 ignition[792]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:14:32.626112 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 28 00:14:32.627433 ignition[792]: kargs: kargs passed Apr 28 00:14:32.627510 ignition[792]: Ignition finished successfully Apr 28 00:14:32.632460 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 28 00:14:32.642222 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 28 00:14:32.657696 ignition[798]: Ignition 2.19.0 Apr 28 00:14:32.657708 ignition[798]: Stage: disks Apr 28 00:14:32.657941 ignition[798]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:14:32.657951 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 28 00:14:32.661546 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 28 00:14:32.658999 ignition[798]: disks: disks passed Apr 28 00:14:32.662584 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 28 00:14:32.659062 ignition[798]: Ignition finished successfully Apr 28 00:14:32.663645 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 28 00:14:32.664757 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:14:32.665987 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:14:32.666809 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:14:32.674142 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 28 00:14:32.691457 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 28 00:14:32.699291 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 28 00:14:32.704045 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 28 00:14:32.759920 kernel: EXT4-fs (sda9): mounted filesystem 2d8f83b6-5f3b-4fc5-b0f6-3405e8e67f7b r/w with ordered data mode. Quota mode: none. Apr 28 00:14:32.760334 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 28 00:14:32.761804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 28 00:14:32.770052 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 28 00:14:32.773550 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 28 00:14:32.776058 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 28 00:14:32.776874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 28 00:14:32.776908 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:14:32.788982 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 28 00:14:32.792875 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814) Apr 28 00:14:32.794486 kernel: BTRFS info (device sda6): first mount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220 Apr 28 00:14:32.794545 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 28 00:14:32.795122 kernel: BTRFS info (device sda6): using free space tree Apr 28 00:14:32.801907 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 28 00:14:32.801989 kernel: BTRFS info (device sda6): auto enabling async discard Apr 28 00:14:32.806147 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 28 00:14:32.811316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 28 00:14:32.845597 coreos-metadata[816]: Apr 28 00:14:32.845 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 28 00:14:32.847883 coreos-metadata[816]: Apr 28 00:14:32.847 INFO Fetch successful Apr 28 00:14:32.850134 coreos-metadata[816]: Apr 28 00:14:32.848 INFO wrote hostname ci-4081-3-7-n-d098215774 to /sysroot/etc/hostname Apr 28 00:14:32.851510 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 28 00:14:32.867678 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Apr 28 00:14:32.875011 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Apr 28 00:14:32.880860 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Apr 28 00:14:32.887156 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Apr 28 00:14:32.997957 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 28 00:14:33.005101 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 28 00:14:33.009191 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 28 00:14:33.018928 kernel: BTRFS info (device sda6): last unmount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220 Apr 28 00:14:33.051365 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 28 00:14:33.054311 ignition[930]: INFO : Ignition 2.19.0 Apr 28 00:14:33.054311 ignition[930]: INFO : Stage: mount Apr 28 00:14:33.055476 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:14:33.055476 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 28 00:14:33.058332 ignition[930]: INFO : mount: mount passed Apr 28 00:14:33.058332 ignition[930]: INFO : Ignition finished successfully Apr 28 00:14:33.059566 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 28 00:14:33.076115 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Apr 28 00:14:33.151212 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 00:14:33.162180 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 00:14:33.172107 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Apr 28 00:14:33.174051 kernel: BTRFS info (device sda6): first mount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220
Apr 28 00:14:33.174123 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 28 00:14:33.174147 kernel: BTRFS info (device sda6): using free space tree
Apr 28 00:14:33.177880 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 28 00:14:33.177944 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 28 00:14:33.181045 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 00:14:33.203388 ignition[959]: INFO : Ignition 2.19.0
Apr 28 00:14:33.204266 ignition[959]: INFO : Stage: files
Apr 28 00:14:33.204953 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:14:33.206662 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 28 00:14:33.206662 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Apr 28 00:14:33.208686 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 28 00:14:33.209564 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 00:14:33.213887 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 00:14:33.214946 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 28 00:14:33.216674 unknown[959]: wrote ssh authorized keys file for user: core
Apr 28 00:14:33.219353 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 00:14:33.219353 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 28 00:14:33.219353 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 28 00:14:33.300847 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 28 00:14:33.369783 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 00:14:33.370955 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 00:14:33.378942 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 00:14:33.378942 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 00:14:33.378942 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 28 00:14:33.378942 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 28 00:14:33.378942 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 28 00:14:33.378942 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 28 00:14:33.612389 systemd-networkd[781]: eth1: Gained IPv6LL
Apr 28 00:14:33.723259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 28 00:14:34.316150 systemd-networkd[781]: eth0: Gained IPv6LL
Apr 28 00:14:34.390165 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 28 00:14:34.390165 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 00:14:34.395301 ignition[959]: INFO : files: files passed
Apr 28 00:14:34.395301 ignition[959]: INFO : Ignition finished successfully
Apr 28 00:14:34.400885 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 00:14:34.418899 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 00:14:34.421808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 00:14:34.428650 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 00:14:34.435542 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 00:14:34.452926 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:14:34.452926 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:14:34.457006 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 28 00:14:34.459055 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:14:34.460420 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 28 00:14:34.467153 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 28 00:14:34.510673 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 28 00:14:34.510834 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 28 00:14:34.512778 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 28 00:14:34.513532 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 28 00:14:34.514710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 28 00:14:34.522178 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 28 00:14:34.539838 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:14:34.545162 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 28 00:14:34.561764 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:14:34.562780 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:14:34.564198 systemd[1]: Stopped target timers.target - Timer Units. Apr 28 00:14:34.565743 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Apr 28 00:14:34.565925 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 28 00:14:34.567642 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 28 00:14:34.568920 systemd[1]: Stopped target basic.target - Basic System. Apr 28 00:14:34.569995 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 28 00:14:34.571124 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 28 00:14:34.572465 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 28 00:14:34.573702 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 28 00:14:34.574738 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:14:34.575952 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 28 00:14:34.577352 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 28 00:14:34.578490 systemd[1]: Stopped target swap.target - Swaps. Apr 28 00:14:34.579401 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 28 00:14:34.579533 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:14:34.580957 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:14:34.581665 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:14:34.582771 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 28 00:14:34.583306 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:14:34.584026 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 28 00:14:34.584151 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 28 00:14:34.585819 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Apr 28 00:14:34.585986 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 28 00:14:34.587543 systemd[1]: ignition-files.service: Deactivated successfully. Apr 28 00:14:34.587646 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 28 00:14:34.588591 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 28 00:14:34.588690 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 28 00:14:34.595086 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 28 00:14:34.599138 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 28 00:14:34.599687 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 28 00:14:34.599826 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:14:34.603603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 28 00:14:34.603927 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:14:34.613089 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 28 00:14:34.613198 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 28 00:14:34.620600 ignition[1011]: INFO : Ignition 2.19.0 Apr 28 00:14:34.621411 ignition[1011]: INFO : Stage: umount Apr 28 00:14:34.622318 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 28 00:14:34.622318 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 28 00:14:34.623897 ignition[1011]: INFO : umount: umount passed Apr 28 00:14:34.625003 ignition[1011]: INFO : Ignition finished successfully Apr 28 00:14:34.626581 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 28 00:14:34.627529 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 28 00:14:34.631997 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 28 00:14:34.632621 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 28 00:14:34.632674 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 28 00:14:34.633834 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 28 00:14:34.633927 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 28 00:14:34.635045 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 28 00:14:34.635087 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 28 00:14:34.637010 systemd[1]: Stopped target network.target - Network. Apr 28 00:14:34.638599 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 28 00:14:34.638663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 28 00:14:34.641257 systemd[1]: Stopped target paths.target - Path Units. Apr 28 00:14:34.645512 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 28 00:14:34.649547 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:14:34.650868 systemd[1]: Stopped target slices.target - Slice Units. Apr 28 00:14:34.654473 systemd[1]: Stopped target sockets.target - Socket Units. Apr 28 00:14:34.655755 systemd[1]: iscsid.socket: Deactivated successfully. Apr 28 00:14:34.655818 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:14:34.656841 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 28 00:14:34.658071 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:14:34.659930 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 28 00:14:34.659996 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 28 00:14:34.660665 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 28 00:14:34.660711 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 28 00:14:34.662039 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 28 00:14:34.665907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 28 00:14:34.666079 systemd-networkd[781]: eth0: DHCPv6 lease lost Apr 28 00:14:34.668431 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 28 00:14:34.668539 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 28 00:14:34.669929 systemd-networkd[781]: eth1: DHCPv6 lease lost Apr 28 00:14:34.671829 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 28 00:14:34.671955 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 28 00:14:34.674203 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 28 00:14:34.674377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 28 00:14:34.677082 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 28 00:14:34.677265 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 28 00:14:34.679292 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 28 00:14:34.679348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:14:34.686123 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 28 00:14:34.686713 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 28 00:14:34.686792 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:14:34.689876 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 00:14:34.689947 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:14:34.690617 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 28 00:14:34.690662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 28 00:14:34.692464 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 28 00:14:34.692512 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:14:34.698320 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:14:34.716170 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 28 00:14:34.716519 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:14:34.718492 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 28 00:14:34.718565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 28 00:14:34.719634 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 28 00:14:34.719680 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:14:34.721578 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 28 00:14:34.721637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 28 00:14:34.723634 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 28 00:14:34.723694 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 28 00:14:34.725465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:14:34.725518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:14:34.740416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 28 00:14:34.741976 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 28 00:14:34.742106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:14:34.749416 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 28 00:14:34.749508 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:14:34.750753 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Apr 28 00:14:34.750816 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:14:34.751892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:14:34.751947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:14:34.754163 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 28 00:14:34.754317 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 28 00:14:34.755468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 28 00:14:34.755567 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 28 00:14:34.757271 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 28 00:14:34.766306 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 28 00:14:34.774607 systemd[1]: Switching root. Apr 28 00:14:34.814912 systemd-journald[237]: Journal stopped Apr 28 00:14:35.694584 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Apr 28 00:14:35.694672 kernel: SELinux: policy capability network_peer_controls=1 Apr 28 00:14:35.694690 kernel: SELinux: policy capability open_perms=1 Apr 28 00:14:35.694700 kernel: SELinux: policy capability extended_socket_class=1 Apr 28 00:14:35.694709 kernel: SELinux: policy capability always_check_network=0 Apr 28 00:14:35.694719 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 28 00:14:35.694729 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 28 00:14:35.694739 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 28 00:14:35.694748 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 28 00:14:35.694760 kernel: audit: type=1403 audit(1777335274.929:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 28 00:14:35.694772 systemd[1]: Successfully loaded SELinux policy in 36.128ms. Apr 28 00:14:35.694797 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.321ms. 
Apr 28 00:14:35.694809 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 00:14:35.694820 systemd[1]: Detected virtualization kvm. Apr 28 00:14:35.694831 systemd[1]: Detected architecture arm64. Apr 28 00:14:35.694842 systemd[1]: Detected first boot. Apr 28 00:14:35.694901 systemd[1]: Hostname set to . Apr 28 00:14:35.694920 systemd[1]: Initializing machine ID from VM UUID. Apr 28 00:14:35.694931 zram_generator::config[1053]: No configuration found. Apr 28 00:14:35.694943 systemd[1]: Populated /etc with preset unit settings. Apr 28 00:14:35.694953 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 28 00:14:35.694964 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 28 00:14:35.694974 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 28 00:14:35.694986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 28 00:14:35.694996 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 28 00:14:35.695007 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 28 00:14:35.695019 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 28 00:14:35.695030 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 28 00:14:35.695041 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 28 00:14:35.695052 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 28 00:14:35.695062 systemd[1]: Created slice user.slice - User and Session Slice. 
Apr 28 00:14:35.695073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:14:35.695084 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:14:35.695095 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 28 00:14:35.695108 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 28 00:14:35.695121 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 28 00:14:35.695132 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:14:35.695143 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 28 00:14:35.695153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:14:35.695164 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 28 00:14:35.695174 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 28 00:14:35.695186 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 28 00:14:35.695238 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 28 00:14:35.695255 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:14:35.695266 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:14:35.695277 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:14:35.695288 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:14:35.695298 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 28 00:14:35.695309 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 28 00:14:35.695319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 28 00:14:35.695333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:14:35.695344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:14:35.695355 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 28 00:14:35.695367 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 28 00:14:35.695378 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 28 00:14:35.695388 systemd[1]: Mounting media.mount - External Media Directory... Apr 28 00:14:35.695399 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 28 00:14:35.695409 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 28 00:14:35.695419 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 28 00:14:35.695432 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 28 00:14:35.695443 systemd[1]: Reached target machines.target - Containers. Apr 28 00:14:35.695453 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 28 00:14:35.695464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:14:35.695475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:14:35.695488 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 28 00:14:35.695500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:14:35.695513 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:14:35.695524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 28 00:14:35.695535 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 28 00:14:35.695546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:14:35.695557 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 28 00:14:35.695568 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 28 00:14:35.695580 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 28 00:14:35.695591 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 28 00:14:35.695603 systemd[1]: Stopped systemd-fsck-usr.service. Apr 28 00:14:35.695613 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:14:35.695624 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:14:35.695635 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 28 00:14:35.695646 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 28 00:14:35.695656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:14:35.695667 systemd[1]: verity-setup.service: Deactivated successfully. Apr 28 00:14:35.695680 systemd[1]: Stopped verity-setup.service. Apr 28 00:14:35.695690 kernel: loop: module loaded Apr 28 00:14:35.695705 kernel: fuse: init (API version 7.39) Apr 28 00:14:35.695715 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 28 00:14:35.695726 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 28 00:14:35.695739 systemd[1]: Mounted media.mount - External Media Directory. Apr 28 00:14:35.695751 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 28 00:14:35.695763 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 28 00:14:35.695774 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 28 00:14:35.695785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:14:35.695796 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 28 00:14:35.695806 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 28 00:14:35.695817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:14:35.695828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:14:35.695842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:14:35.700459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:14:35.700566 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 28 00:14:35.700581 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 28 00:14:35.700599 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:14:35.700612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:14:35.700623 kernel: ACPI: bus type drm_connector registered Apr 28 00:14:35.700673 systemd-journald[1124]: Collecting audit messages is disabled. Apr 28 00:14:35.700696 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 28 00:14:35.700707 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 28 00:14:35.700718 systemd-journald[1124]: Journal started Apr 28 00:14:35.700743 systemd-journald[1124]: Runtime Journal (/run/log/journal/56be9286ae974819b612fea011a8afdc) is 8.0M, max 76.6M, 68.6M free. Apr 28 00:14:35.403133 systemd[1]: Queued start job for default target multi-user.target. Apr 28 00:14:35.422139 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 28 00:14:35.422551 systemd[1]: systemd-journald.service: Deactivated successfully. 
Apr 28 00:14:35.708882 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:14:35.712250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:14:35.712334 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:14:35.713870 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:14:35.714074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:14:35.715111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:14:35.716206 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 28 00:14:35.720520 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 28 00:14:35.721798 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 28 00:14:35.723421 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 28 00:14:35.740778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 28 00:14:35.751958 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 00:14:35.753026 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. Apr 28 00:14:35.753043 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. Apr 28 00:14:35.753575 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 28 00:14:35.753606 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:14:35.755788 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 28 00:14:35.765235 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Apr 28 00:14:35.770940 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 28 00:14:35.771814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:14:35.774229 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 28 00:14:35.781267 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 28 00:14:35.782010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:14:35.793982 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 28 00:14:35.805334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:14:35.818279 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 28 00:14:35.823000 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:14:35.824489 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 28 00:14:35.826938 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 28 00:14:35.832334 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 28 00:14:35.839312 systemd-journald[1124]: Time spent on flushing to /var/log/journal/56be9286ae974819b612fea011a8afdc is 67.640ms for 1129 entries. Apr 28 00:14:35.839312 systemd-journald[1124]: System Journal (/var/log/journal/56be9286ae974819b612fea011a8afdc) is 8.0M, max 584.8M, 576.8M free. Apr 28 00:14:35.922462 systemd-journald[1124]: Received client request to flush runtime journal. 
Apr 28 00:14:35.922530 kernel: loop0: detected capacity change from 0 to 200864 Apr 28 00:14:35.922553 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 28 00:14:35.922570 kernel: loop1: detected capacity change from 0 to 8 Apr 28 00:14:35.845994 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 28 00:14:35.858364 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 28 00:14:35.899755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:14:35.910257 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 28 00:14:35.928301 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 28 00:14:35.935734 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:14:35.941352 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 28 00:14:35.951957 kernel: loop2: detected capacity change from 0 to 114432 Apr 28 00:14:35.948694 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 28 00:14:35.953428 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 28 00:14:35.978989 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 28 00:14:35.989970 kernel: loop3: detected capacity change from 0 to 114328 Apr 28 00:14:35.991927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:14:36.016615 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Apr 28 00:14:36.017300 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Apr 28 00:14:36.024909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 28 00:14:36.030893 kernel: loop4: detected capacity change from 0 to 200864 Apr 28 00:14:36.058996 kernel: loop5: detected capacity change from 0 to 8 Apr 28 00:14:36.062973 kernel: loop6: detected capacity change from 0 to 114432 Apr 28 00:14:36.083361 kernel: loop7: detected capacity change from 0 to 114328 Apr 28 00:14:36.097362 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 28 00:14:36.097846 (sd-merge)[1196]: Merged extensions into '/usr'. Apr 28 00:14:36.107329 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Apr 28 00:14:36.107507 systemd[1]: Reloading... Apr 28 00:14:36.239946 zram_generator::config[1218]: No configuration found. Apr 28 00:14:36.413957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:14:36.466544 systemd[1]: Reloading finished in 355 ms. Apr 28 00:14:36.470387 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 28 00:14:36.503920 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 28 00:14:36.507227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 28 00:14:36.521766 systemd[1]: Starting ensure-sysext.service... Apr 28 00:14:36.528281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:14:36.537265 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Apr 28 00:14:36.537287 systemd[1]: Reloading... Apr 28 00:14:36.578011 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 28 00:14:36.578337 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 28 00:14:36.579052 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 00:14:36.579289 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 28 00:14:36.579336 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 28 00:14:36.582595 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:14:36.582610 systemd-tmpfiles[1260]: Skipping /boot Apr 28 00:14:36.591747 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:14:36.591769 systemd-tmpfiles[1260]: Skipping /boot Apr 28 00:14:36.656877 zram_generator::config[1286]: No configuration found. Apr 28 00:14:36.766803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:14:36.816050 systemd[1]: Reloading finished in 278 ms. Apr 28 00:14:36.842162 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 28 00:14:36.849562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:14:36.864093 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 00:14:36.875182 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 28 00:14:36.880324 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 28 00:14:36.894914 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:14:36.911161 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 28 00:14:36.915107 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 28 00:14:36.925094 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 00:14:36.930580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:14:36.940986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:14:36.955958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:14:36.962112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:14:36.963756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:14:36.965786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 28 00:14:36.973183 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:14:36.974939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:14:36.978763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:14:36.980515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:14:36.986911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:14:36.989128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:14:37.006439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:14:37.018311 augenrules[1355]: No rules Apr 28 00:14:37.017950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:14:37.022493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:14:37.027083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 28 00:14:37.032363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:14:37.034203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:14:37.041347 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 28 00:14:37.046953 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 00:14:37.051345 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 28 00:14:37.053729 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 28 00:14:37.057534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:14:37.057705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:14:37.062715 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Apr 28 00:14:37.068506 systemd[1]: Finished ensure-sysext.service. Apr 28 00:14:37.070951 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 28 00:14:37.079358 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:14:37.079570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:14:37.086749 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 28 00:14:37.091802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:14:37.092082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:14:37.094602 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:14:37.095297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:14:37.098459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 28 00:14:37.098576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:14:37.108038 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 28 00:14:37.108917 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 00:14:37.122987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:14:37.132067 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:14:37.162845 systemd-resolved[1329]: Positive Trust Anchors: Apr 28 00:14:37.163205 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:14:37.163293 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:14:37.170525 systemd-resolved[1329]: Using system hostname 'ci-4081-3-7-n-d098215774'. Apr 28 00:14:37.175336 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:14:37.176297 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:14:37.197066 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 28 00:14:37.198448 systemd[1]: Reached target time-set.target - System Time Set. 
Apr 28 00:14:37.219304 systemd-networkd[1381]: lo: Link UP Apr 28 00:14:37.219657 systemd-networkd[1381]: lo: Gained carrier Apr 28 00:14:37.220937 systemd-networkd[1381]: Enumeration completed Apr 28 00:14:37.221144 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:14:37.222317 systemd[1]: Reached target network.target - Network. Apr 28 00:14:37.231465 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 00:14:37.278096 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 28 00:14:37.373923 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:14:37.373934 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:14:37.374716 systemd-networkd[1381]: eth0: Link UP Apr 28 00:14:37.374726 systemd-networkd[1381]: eth0: Gained carrier Apr 28 00:14:37.374742 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:14:37.378878 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 00:14:37.405608 systemd-networkd[1381]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:14:37.405620 systemd-networkd[1381]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:14:37.407832 systemd-networkd[1381]: eth1: Link UP Apr 28 00:14:37.407843 systemd-networkd[1381]: eth1: Gained carrier Apr 28 00:14:37.407872 systemd-networkd[1381]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 28 00:14:37.414592 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 28 00:14:37.414730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:14:37.422534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:14:37.439069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:14:37.441933 systemd-networkd[1381]: eth0: DHCPv4 address 128.140.91.51/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 28 00:14:37.442876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:14:37.443495 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 28 00:14:37.444078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:14:37.444110 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 00:14:37.454432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:14:37.456258 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:14:37.458386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:14:37.458534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 28 00:14:37.462868 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1402) Apr 28 00:14:37.462927 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 28 00:14:37.462960 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 28 00:14:37.462982 kernel: [drm] features: -context_init Apr 28 00:14:37.463139 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:14:37.463328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:14:37.465866 kernel: [drm] number of scanouts: 1 Apr 28 00:14:37.465971 kernel: [drm] number of cap sets: 0 Apr 28 00:14:37.471140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:14:37.471199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:14:37.473993 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 28 00:14:37.476057 systemd-networkd[1381]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 28 00:14:37.476461 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 28 00:14:37.476871 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 28 00:14:37.481255 kernel: Console: switching to colour frame buffer device 160x50 Apr 28 00:14:37.500887 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 28 00:14:37.517433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 28 00:14:37.539519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 28 00:14:37.554356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 28 00:14:37.557645 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 28 00:14:37.619421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:14:37.673160 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 28 00:14:37.680156 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 28 00:14:37.694212 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:14:37.723693 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 28 00:14:37.725962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:14:37.727154 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:14:37.728067 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 28 00:14:37.729081 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 28 00:14:37.730143 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 28 00:14:37.731013 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 28 00:14:37.731831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 28 00:14:37.732661 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 28 00:14:37.732792 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:14:37.733477 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:14:37.735427 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 28 00:14:37.737752 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Apr 28 00:14:37.744622 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 28 00:14:37.747371 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 28 00:14:37.748968 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 28 00:14:37.749938 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:14:37.750890 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:14:37.751628 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:14:37.751743 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:14:37.753287 systemd[1]: Starting containerd.service - containerd container runtime... Apr 28 00:14:37.760389 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:14:37.758824 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 28 00:14:37.768415 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 28 00:14:37.772053 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 28 00:14:37.779132 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 28 00:14:37.779733 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 28 00:14:37.785008 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 28 00:14:37.791051 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 28 00:14:37.796137 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 28 00:14:37.799840 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 28 00:14:37.806097 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 28 00:14:37.816896 jq[1450]: false Apr 28 00:14:37.820284 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 28 00:14:37.821702 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 28 00:14:37.822401 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 28 00:14:37.824401 systemd[1]: Starting update-engine.service - Update Engine... Apr 28 00:14:37.830436 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 28 00:14:37.848271 coreos-metadata[1448]: Apr 28 00:14:37.831 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 28 00:14:37.848271 coreos-metadata[1448]: Apr 28 00:14:37.838 INFO Fetch successful Apr 28 00:14:37.848271 coreos-metadata[1448]: Apr 28 00:14:37.838 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 28 00:14:37.848271 coreos-metadata[1448]: Apr 28 00:14:37.840 INFO Fetch successful Apr 28 00:14:37.848576 extend-filesystems[1453]: Found loop4 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found loop5 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found loop6 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found loop7 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda1 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda2 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda3 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found usr Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda4 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda6 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda7 Apr 28 00:14:37.848576 extend-filesystems[1453]: Found sda9 Apr 28 00:14:37.848576 extend-filesystems[1453]: Checking size of /dev/sda9 Apr 
28 00:14:37.911605 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 28 00:14:37.832552 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 28 00:14:37.914917 extend-filesystems[1453]: Resized partition /dev/sda9 Apr 28 00:14:37.900608 dbus-daemon[1449]: [system] SELinux support is enabled Apr 28 00:14:37.836758 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 28 00:14:37.923969 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Apr 28 00:14:37.837149 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 28 00:14:37.850679 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 28 00:14:37.851080 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 28 00:14:37.945275 jq[1462]: true Apr 28 00:14:37.901288 systemd[1]: motdgen.service: Deactivated successfully. Apr 28 00:14:37.902220 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 28 00:14:37.946585 tar[1464]: linux-arm64/LICENSE Apr 28 00:14:37.946585 tar[1464]: linux-arm64/helm Apr 28 00:14:37.905345 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 28 00:14:37.919660 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 28 00:14:37.919699 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 28 00:14:37.947120 jq[1490]: true Apr 28 00:14:37.920697 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 28 00:14:37.922054 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 28 00:14:37.922073 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 28 00:14:37.983806 update_engine[1461]: I20260428 00:14:37.982399 1461 main.cc:92] Flatcar Update Engine starting Apr 28 00:14:37.989611 systemd[1]: Started update-engine.service - Update Engine. Apr 28 00:14:37.993168 update_engine[1461]: I20260428 00:14:37.992938 1461 update_check_scheduler.cc:74] Next update check in 9m48s Apr 28 00:14:38.001242 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 28 00:14:38.047487 systemd-logind[1460]: New seat seat0. Apr 28 00:14:38.056771 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Apr 28 00:14:38.056803 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 28 00:14:38.058923 systemd[1]: Started systemd-logind.service - User Login Management. Apr 28 00:14:38.072876 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1404) Apr 28 00:14:38.084006 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 28 00:14:38.113293 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 28 00:14:38.113293 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 28 00:14:38.113293 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Apr 28 00:14:38.115919 extend-filesystems[1453]: Resized filesystem in /dev/sda9 Apr 28 00:14:38.115919 extend-filesystems[1453]: Found sr0 Apr 28 00:14:38.118746 bash[1522]: Updated "/home/core/.ssh/authorized_keys" Apr 28 00:14:38.132290 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 28 00:14:38.132525 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 28 00:14:38.134603 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 28 00:14:38.155220 systemd[1]: Starting sshkeys.service... Apr 28 00:14:38.157200 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 28 00:14:38.160547 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 28 00:14:38.169247 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 28 00:14:38.182196 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 28 00:14:38.246398 coreos-metadata[1533]: Apr 28 00:14:38.246 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 28 00:14:38.251219 coreos-metadata[1533]: Apr 28 00:14:38.251 INFO Fetch successful Apr 28 00:14:38.253014 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 28 00:14:38.257112 unknown[1533]: wrote ssh authorized keys file for user: core Apr 28 00:14:38.288568 containerd[1482]: time="2026-04-28T00:14:38.288474600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 28 00:14:38.291600 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys" Apr 28 00:14:38.295903 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 28 00:14:38.298480 systemd[1]: Finished sshkeys.service. 
Apr 28 00:14:38.324545 containerd[1482]: time="2026-04-28T00:14:38.324309280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328156 containerd[1482]: time="2026-04-28T00:14:38.328104760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328156 containerd[1482]: time="2026-04-28T00:14:38.328150600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 28 00:14:38.328311 containerd[1482]: time="2026-04-28T00:14:38.328171200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 28 00:14:38.328423 containerd[1482]: time="2026-04-28T00:14:38.328397440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 28 00:14:38.328459 containerd[1482]: time="2026-04-28T00:14:38.328427040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328519 containerd[1482]: time="2026-04-28T00:14:38.328500080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328544 containerd[1482]: time="2026-04-28T00:14:38.328517600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328725 containerd[1482]: time="2026-04-28T00:14:38.328703120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328750 containerd[1482]: time="2026-04-28T00:14:38.328726640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328750 containerd[1482]: time="2026-04-28T00:14:38.328742840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328794 containerd[1482]: time="2026-04-28T00:14:38.328753200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 28 00:14:38.328885 containerd[1482]: time="2026-04-28T00:14:38.328834640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:14:38.329859 containerd[1482]: time="2026-04-28T00:14:38.329066000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 28 00:14:38.329859 containerd[1482]: time="2026-04-28T00:14:38.329192920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 28 00:14:38.329859 containerd[1482]: time="2026-04-28T00:14:38.329208000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 28 00:14:38.329859 containerd[1482]: time="2026-04-28T00:14:38.329286960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 28 00:14:38.329859 containerd[1482]: time="2026-04-28T00:14:38.329328200Z" level=info msg="metadata content store policy set" policy=shared Apr 28 00:14:38.339575 containerd[1482]: time="2026-04-28T00:14:38.339518040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 28 00:14:38.339677 containerd[1482]: time="2026-04-28T00:14:38.339608600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 28 00:14:38.339677 containerd[1482]: time="2026-04-28T00:14:38.339628440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 28 00:14:38.339677 containerd[1482]: time="2026-04-28T00:14:38.339645920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 28 00:14:38.339677 containerd[1482]: time="2026-04-28T00:14:38.339662280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 28 00:14:38.340993 containerd[1482]: time="2026-04-28T00:14:38.340960040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341371680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341524280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341543280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341557360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341571320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341585160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341599320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341614040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341629200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341642400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341655280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341668360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341690080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.342865 containerd[1482]: time="2026-04-28T00:14:38.341706720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341719200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341732680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341745520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341758560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341771200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341785720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341801200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341815760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341828880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341841160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341875720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341895520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341918600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341931680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.343198 containerd[1482]: time="2026-04-28T00:14:38.341943240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346412760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346473680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346487360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346515200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346525640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346544200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346555800Z" level=info msg="NRI interface is disabled by configuration." Apr 28 00:14:38.348411 containerd[1482]: time="2026-04-28T00:14:38.346567160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.346944720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.347006360Z" level=info msg="Connect containerd service" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.347043680Z" level=info msg="using legacy CRI server" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.347050200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.347143440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.347935640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.348239320Z" level=info msg="Start subscribing containerd event" Apr 28 
00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.348304760Z" level=info msg="Start recovering state" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.348389280Z" level=info msg="Start event monitor" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.348401480Z" level=info msg="Start snapshots syncer" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.348411320Z" level=info msg="Start cni network conf syncer for default" Apr 28 00:14:38.348643 containerd[1482]: time="2026-04-28T00:14:38.348419600Z" level=info msg="Start streaming server" Apr 28 00:14:38.351061 containerd[1482]: time="2026-04-28T00:14:38.351030840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 00:14:38.351268 containerd[1482]: time="2026-04-28T00:14:38.351236200Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 00:14:38.351407 containerd[1482]: time="2026-04-28T00:14:38.351392240Z" level=info msg="containerd successfully booted in 0.064100s" Apr 28 00:14:38.351528 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 00:14:38.604099 systemd-networkd[1381]: eth1: Gained IPv6LL Apr 28 00:14:38.607998 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 28 00:14:38.610952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 00:14:38.614610 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 00:14:38.624135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:14:38.627395 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 00:14:38.666961 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 00:14:38.684492 tar[1464]: linux-arm64/README.md Apr 28 00:14:38.703915 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 28 00:14:38.710196 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 00:14:38.819701 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 28 00:14:38.842410 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 28 00:14:38.852456 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 28 00:14:38.858339 systemd[1]: Started sshd@0-128.140.91.51:22-203.34.56.186:39188.service - OpenSSH per-connection server daemon (203.34.56.186:39188). Apr 28 00:14:38.872523 systemd[1]: issuegen.service: Deactivated successfully. Apr 28 00:14:38.874947 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 28 00:14:38.885393 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 28 00:14:38.897912 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 28 00:14:38.908397 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 28 00:14:38.912304 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 28 00:14:38.914905 systemd[1]: Reached target getty.target - Login Prompts. Apr 28 00:14:39.116046 systemd-networkd[1381]: eth0: Gained IPv6LL Apr 28 00:14:39.116796 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 28 00:14:39.434249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:14:39.437681 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 00:14:39.441039 systemd[1]: Startup finished in 814ms (kernel) + 5.234s (initrd) + 4.547s (userspace) = 10.596s. 
Apr 28 00:14:39.445385 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:14:39.881522 kubelet[1581]: E0428 00:14:39.881354 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:14:39.885873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:14:39.886103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:14:50.073510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 00:14:50.084252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:14:50.212126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:14:50.213025 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:14:50.261458 kubelet[1599]: E0428 00:14:50.261392 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:14:50.266054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:14:50.266365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:15:00.323652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 00:15:00.333243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 28 00:15:00.464105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:15:00.467767 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:15:00.513957 kubelet[1614]: E0428 00:15:00.513882 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:15:00.519660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:15:00.519811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:15:07.371382 systemd[1]: Started sshd@1-128.140.91.51:22-50.85.169.122:38614.service - OpenSSH per-connection server daemon (50.85.169.122:38614). Apr 28 00:15:07.498461 sshd[1622]: Accepted publickey for core from 50.85.169.122 port 38614 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:07.501365 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:07.512837 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 00:15:07.519340 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 00:15:07.524053 systemd-logind[1460]: New session 1 of user core. Apr 28 00:15:07.530898 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 00:15:07.541953 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 00:15:07.546519 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 00:15:07.654393 systemd[1626]: Queued start job for default target default.target. 
Apr 28 00:15:07.665530 systemd[1626]: Created slice app.slice - User Application Slice. Apr 28 00:15:07.665601 systemd[1626]: Reached target paths.target - Paths. Apr 28 00:15:07.665625 systemd[1626]: Reached target timers.target - Timers. Apr 28 00:15:07.668228 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 00:15:07.683581 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 28 00:15:07.683717 systemd[1626]: Reached target sockets.target - Sockets. Apr 28 00:15:07.683731 systemd[1626]: Reached target basic.target - Basic System. Apr 28 00:15:07.683776 systemd[1626]: Reached target default.target - Main User Target. Apr 28 00:15:07.683803 systemd[1626]: Startup finished in 130ms. Apr 28 00:15:07.683934 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 00:15:07.694283 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 00:15:07.828376 systemd[1]: Started sshd@2-128.140.91.51:22-50.85.169.122:38624.service - OpenSSH per-connection server daemon (50.85.169.122:38624). Apr 28 00:15:07.950708 sshd[1637]: Accepted publickey for core from 50.85.169.122 port 38624 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:07.952162 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:07.958963 systemd-logind[1460]: New session 2 of user core. Apr 28 00:15:07.969212 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 00:15:08.070791 sshd[1637]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:08.077295 systemd[1]: sshd@2-128.140.91.51:22-50.85.169.122:38624.service: Deactivated successfully. Apr 28 00:15:08.080592 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 00:15:08.081680 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Apr 28 00:15:08.082842 systemd-logind[1460]: Removed session 2. 
Apr 28 00:15:08.104342 systemd[1]: Started sshd@3-128.140.91.51:22-50.85.169.122:38636.service - OpenSSH per-connection server daemon (50.85.169.122:38636). Apr 28 00:15:08.228542 sshd[1644]: Accepted publickey for core from 50.85.169.122 port 38636 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:08.231450 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:08.236617 systemd-logind[1460]: New session 3 of user core. Apr 28 00:15:08.245567 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 00:15:08.344982 sshd[1644]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:08.348585 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Apr 28 00:15:08.349269 systemd[1]: sshd@3-128.140.91.51:22-50.85.169.122:38636.service: Deactivated successfully. Apr 28 00:15:08.352246 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 00:15:08.353369 systemd-logind[1460]: Removed session 3. Apr 28 00:15:08.375490 systemd[1]: Started sshd@4-128.140.91.51:22-50.85.169.122:38642.service - OpenSSH per-connection server daemon (50.85.169.122:38642). Apr 28 00:15:08.499842 sshd[1651]: Accepted publickey for core from 50.85.169.122 port 38642 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:08.502266 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:08.509130 systemd-logind[1460]: New session 4 of user core. Apr 28 00:15:08.518246 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 00:15:08.623212 sshd[1651]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:08.630016 systemd[1]: sshd@4-128.140.91.51:22-50.85.169.122:38642.service: Deactivated successfully. Apr 28 00:15:08.632420 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 00:15:08.634800 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. 
Apr 28 00:15:08.636259 systemd-logind[1460]: Removed session 4.
Apr 28 00:15:08.656321 systemd[1]: Started sshd@5-128.140.91.51:22-50.85.169.122:38648.service - OpenSSH per-connection server daemon (50.85.169.122:38648).
Apr 28 00:15:08.777670 sshd[1658]: Accepted publickey for core from 50.85.169.122 port 38648 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:15:08.781225 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:15:08.787641 systemd-logind[1460]: New session 5 of user core.
Apr 28 00:15:08.801187 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 28 00:15:08.898761 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 28 00:15:08.899101 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:15:08.914318 sudo[1661]: pam_unix(sudo:session): session closed for user root
Apr 28 00:15:08.931631 sshd[1658]: pam_unix(sshd:session): session closed for user core
Apr 28 00:15:08.938183 systemd[1]: sshd@5-128.140.91.51:22-50.85.169.122:38648.service: Deactivated successfully.
Apr 28 00:15:08.940893 systemd[1]: session-5.scope: Deactivated successfully.
Apr 28 00:15:08.943809 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit.
Apr 28 00:15:08.967450 systemd[1]: Started sshd@6-128.140.91.51:22-50.85.169.122:38658.service - OpenSSH per-connection server daemon (50.85.169.122:38658).
Apr 28 00:15:08.970969 systemd-logind[1460]: Removed session 5.
Apr 28 00:15:09.083986 sshd[1666]: Accepted publickey for core from 50.85.169.122 port 38658 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:15:09.086233 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:15:09.092428 systemd-logind[1460]: New session 6 of user core.
Apr 28 00:15:09.098194 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 28 00:15:09.182001 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 28 00:15:09.182303 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:15:09.187558 sudo[1670]: pam_unix(sudo:session): session closed for user root
Apr 28 00:15:09.194259 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 28 00:15:09.194550 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:15:09.218031 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 28 00:15:09.220663 auditctl[1673]: No rules
Apr 28 00:15:09.222163 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 28 00:15:09.223024 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 28 00:15:09.228607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 00:15:09.266624 augenrules[1691]: No rules
Apr 28 00:15:09.268378 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 00:15:09.271102 sudo[1669]: pam_unix(sudo:session): session closed for user root
Apr 28 00:15:09.286716 sshd[1666]: pam_unix(sshd:session): session closed for user core
Apr 28 00:15:09.292843 systemd[1]: sshd@6-128.140.91.51:22-50.85.169.122:38658.service: Deactivated successfully.
Apr 28 00:15:09.296544 systemd[1]: session-6.scope: Deactivated successfully.
Apr 28 00:15:09.297845 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit.
Apr 28 00:15:09.298823 systemd-logind[1460]: Removed session 6.
Apr 28 00:15:09.321612 systemd[1]: Started sshd@7-128.140.91.51:22-50.85.169.122:57464.service - OpenSSH per-connection server daemon (50.85.169.122:57464).
Apr 28 00:15:09.443876 sshd[1699]: Accepted publickey for core from 50.85.169.122 port 57464 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:15:09.446786 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:15:09.453204 systemd-logind[1460]: New session 7 of user core.
Apr 28 00:15:09.459165 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 28 00:15:09.543028 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 28 00:15:09.543328 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:15:09.555317 systemd-timesyncd[1379]: Contacted time server 176.9.157.155:123 (2.flatcar.pool.ntp.org).
Apr 28 00:15:09.555375 systemd-timesyncd[1379]: Initial clock synchronization to Tue 2026-04-28 00:15:09.799389 UTC.
Apr 28 00:15:09.852007 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 28 00:15:09.852558 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 28 00:15:10.117652 dockerd[1717]: time="2026-04-28T00:15:10.117279394Z" level=info msg="Starting up"
Apr 28 00:15:10.228315 dockerd[1717]: time="2026-04-28T00:15:10.228240330Z" level=info msg="Loading containers: start."
Apr 28 00:15:10.346214 kernel: Initializing XFRM netlink socket
Apr 28 00:15:10.425208 systemd-networkd[1381]: docker0: Link UP
Apr 28 00:15:10.447920 dockerd[1717]: time="2026-04-28T00:15:10.447623859Z" level=info msg="Loading containers: done."
Apr 28 00:15:10.464178 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4064624709-merged.mount: Deactivated successfully.
Apr 28 00:15:10.468551 dockerd[1717]: time="2026-04-28T00:15:10.467952458Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 00:15:10.468551 dockerd[1717]: time="2026-04-28T00:15:10.468108358Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 28 00:15:10.468551 dockerd[1717]: time="2026-04-28T00:15:10.468273369Z" level=info msg="Daemon has completed initialization"
Apr 28 00:15:10.515785 dockerd[1717]: time="2026-04-28T00:15:10.515641198Z" level=info msg="API listen on /run/docker.sock"
Apr 28 00:15:10.516700 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 28 00:15:10.573841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 28 00:15:10.584551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:15:10.703061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:15:10.707405 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:15:10.752590 kubelet[1863]: E0428 00:15:10.752543 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:15:10.755485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:15:10.755627 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:15:11.067159 containerd[1482]: time="2026-04-28T00:15:11.067010695Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 28 00:15:11.635120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26225698.mount: Deactivated successfully.
Apr 28 00:15:12.852317 containerd[1482]: time="2026-04-28T00:15:12.852232709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:12.856178 containerd[1482]: time="2026-04-28T00:15:12.856094872Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=24193866"
Apr 28 00:15:12.858901 containerd[1482]: time="2026-04-28T00:15:12.857317855Z" level=info msg="ImageCreate event name:\"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:12.861057 containerd[1482]: time="2026-04-28T00:15:12.861009236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:12.862539 containerd[1482]: time="2026-04-28T00:15:12.862495023Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"24190367\" in 1.795435248s"
Apr 28 00:15:12.862539 containerd[1482]: time="2026-04-28T00:15:12.862538087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:bf3fdee5548e267fd53c67a79d712e896d47f48203512415518d59da7f985228\""
Apr 28 00:15:12.863159 containerd[1482]: time="2026-04-28T00:15:12.863131772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 28 00:15:14.023912 containerd[1482]: time="2026-04-28T00:15:14.022364845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:14.024671 containerd[1482]: time="2026-04-28T00:15:14.024633057Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=18901464"
Apr 28 00:15:14.026335 containerd[1482]: time="2026-04-28T00:15:14.026307180Z" level=info msg="ImageCreate event name:\"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:14.031473 containerd[1482]: time="2026-04-28T00:15:14.031417333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:14.033643 containerd[1482]: time="2026-04-28T00:15:14.033603095Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"20408083\" in 1.170435888s"
Apr 28 00:15:14.033643 containerd[1482]: time="2026-04-28T00:15:14.033641816Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:161b12aee2701d72b2e8a7d114f5f83122603d8c5d1d3cd7f72aa6fac5d9524c\""
Apr 28 00:15:14.035012 containerd[1482]: time="2026-04-28T00:15:14.034987197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 28 00:15:15.016152 containerd[1482]: time="2026-04-28T00:15:15.016054695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:15.019775 containerd[1482]: time="2026-04-28T00:15:15.019496090Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=14047965"
Apr 28 00:15:15.021407 containerd[1482]: time="2026-04-28T00:15:15.021354228Z" level=info msg="ImageCreate event name:\"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:15.027849 containerd[1482]: time="2026-04-28T00:15:15.027028135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:15.028642 containerd[1482]: time="2026-04-28T00:15:15.028597580Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"15554602\" in 993.489359ms"
Apr 28 00:15:15.028642 containerd[1482]: time="2026-04-28T00:15:15.028637151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:85bc0b83d6779f309f0f2d8724ee225e2a061dc60b1b127f8a9b8843bad36e14\""
Apr 28 00:15:15.029192 containerd[1482]: time="2026-04-28T00:15:15.029128570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 28 00:15:16.073411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324276654.mount: Deactivated successfully.
Apr 28 00:15:16.304963 containerd[1482]: time="2026-04-28T00:15:16.304606881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:16.307143 containerd[1482]: time="2026-04-28T00:15:16.307075985Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=22606312"
Apr 28 00:15:16.308892 containerd[1482]: time="2026-04-28T00:15:16.308752939Z" level=info msg="ImageCreate event name:\"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:16.315117 containerd[1482]: time="2026-04-28T00:15:16.315041973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:16.316511 containerd[1482]: time="2026-04-28T00:15:16.315896365Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"22605305\" in 1.286713422s"
Apr 28 00:15:16.316511 containerd[1482]: time="2026-04-28T00:15:16.315945104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:c63683691df94ddfb3e7b1449f68fd9df087b1bda7cdecd1e9292214f6adc745\""
Apr 28 00:15:16.317196 containerd[1482]: time="2026-04-28T00:15:16.317153321Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 28 00:15:16.834820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371355373.mount: Deactivated successfully.
Apr 28 00:15:17.694219 containerd[1482]: time="2026-04-28T00:15:17.694150259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:17.696383 containerd[1482]: time="2026-04-28T00:15:17.696338876Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498"
Apr 28 00:15:17.697342 containerd[1482]: time="2026-04-28T00:15:17.696713354Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:17.702144 containerd[1482]: time="2026-04-28T00:15:17.702091328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:17.703541 containerd[1482]: time="2026-04-28T00:15:17.703492736Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.386115772s"
Apr 28 00:15:17.703541 containerd[1482]: time="2026-04-28T00:15:17.703537830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Apr 28 00:15:17.704062 containerd[1482]: time="2026-04-28T00:15:17.704022089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 28 00:15:18.175139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230286211.mount: Deactivated successfully.
Apr 28 00:15:18.188569 containerd[1482]: time="2026-04-28T00:15:18.187693014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:18.189403 containerd[1482]: time="2026-04-28T00:15:18.189351638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Apr 28 00:15:18.190645 containerd[1482]: time="2026-04-28T00:15:18.190595637Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:18.200635 containerd[1482]: time="2026-04-28T00:15:18.200583601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:18.202886 containerd[1482]: time="2026-04-28T00:15:18.202167368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 497.237508ms"
Apr 28 00:15:18.202886 containerd[1482]: time="2026-04-28T00:15:18.202248854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 28 00:15:18.203213 containerd[1482]: time="2026-04-28T00:15:18.203190028Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 28 00:15:18.705668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265110741.mount: Deactivated successfully.
Apr 28 00:15:19.391038 containerd[1482]: time="2026-04-28T00:15:19.390957642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:19.393658 containerd[1482]: time="2026-04-28T00:15:19.393574088Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139756"
Apr 28 00:15:19.395277 containerd[1482]: time="2026-04-28T00:15:19.395238673Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:19.400894 containerd[1482]: time="2026-04-28T00:15:19.399911429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:15:19.403742 containerd[1482]: time="2026-04-28T00:15:19.403462337Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.20013422s"
Apr 28 00:15:19.403742 containerd[1482]: time="2026-04-28T00:15:19.403569189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Apr 28 00:15:20.823494 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 28 00:15:20.833602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:15:20.954112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:15:20.955763 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:15:20.993952 kubelet[2093]: E0428 00:15:20.993911 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:15:20.997012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:15:20.997308 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:15:22.859873 update_engine[1461]: I20260428 00:15:22.857887 1461 update_attempter.cc:509] Updating boot flags...
Apr 28 00:15:22.918708 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2108)
Apr 28 00:15:22.988391 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2109)
Apr 28 00:15:23.041877 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2109)
Apr 28 00:15:24.025148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:15:24.032200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:15:24.067472 systemd[1]: Reloading requested from client PID 2127 ('systemctl') (unit session-7.scope)...
Apr 28 00:15:24.067711 systemd[1]: Reloading...
Apr 28 00:15:24.200926 zram_generator::config[2172]: No configuration found.
Apr 28 00:15:24.306601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 00:15:24.380025 systemd[1]: Reloading finished in 311 ms.
Apr 28 00:15:24.437842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:15:24.441954 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:15:24.444112 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 00:15:24.444303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:15:24.451208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:15:24.579087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:15:24.586225 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 00:15:24.630844 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 28 00:15:24.630844 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 00:15:24.632667 kubelet[2219]: I0428 00:15:24.632534 2219 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 28 00:15:25.248653 kubelet[2219]: I0428 00:15:25.248510 2219 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 28 00:15:25.248653 kubelet[2219]: I0428 00:15:25.248541 2219 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 00:15:25.250873 kubelet[2219]: I0428 00:15:25.250193 2219 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 28 00:15:25.250873 kubelet[2219]: I0428 00:15:25.250219 2219 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 00:15:25.250873 kubelet[2219]: I0428 00:15:25.250512 2219 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 28 00:15:25.264011 kubelet[2219]: E0428 00:15:25.263943 2219 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://128.140.91.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:15:25.265471 kubelet[2219]: I0428 00:15:25.265443 2219 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 00:15:25.268379 kubelet[2219]: E0428 00:15:25.268349 2219 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 00:15:25.268619 kubelet[2219]: I0428 00:15:25.268606 2219 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 28 00:15:25.270891 kubelet[2219]: I0428 00:15:25.270849 2219 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 28 00:15:25.271285 kubelet[2219]: I0428 00:15:25.271257 2219 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 00:15:25.271536 kubelet[2219]: I0428 00:15:25.271352 2219 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-n-d098215774","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 28 00:15:25.271653 kubelet[2219]: I0428 00:15:25.271642 2219 topology_manager.go:138] "Creating topology manager with none policy"
Apr 28 00:15:25.271704 kubelet[2219]: I0428 00:15:25.271696 2219 container_manager_linux.go:306] "Creating device plugin manager"
Apr 28 00:15:25.271882 kubelet[2219]: I0428 00:15:25.271846 2219 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 28 00:15:25.275498 kubelet[2219]: I0428 00:15:25.275476 2219 state_mem.go:36] "Initialized new in-memory state store"
Apr 28 00:15:25.277422 kubelet[2219]: I0428 00:15:25.277400 2219 kubelet.go:475] "Attempting to sync node with API server"
Apr 28 00:15:25.277743 kubelet[2219]: I0428 00:15:25.277556 2219 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 00:15:25.277743 kubelet[2219]: I0428 00:15:25.277596 2219 kubelet.go:387] "Adding apiserver pod source"
Apr 28 00:15:25.277743 kubelet[2219]: I0428 00:15:25.277613 2219 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 00:15:25.278052 kubelet[2219]: E0428 00:15:25.278020 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://128.140.91.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-n-d098215774&limit=500&resourceVersion=0\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 28 00:15:25.279488 kubelet[2219]: E0428 00:15:25.279308 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://128.140.91.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 28 00:15:25.279810 kubelet[2219]: I0428 00:15:25.279759 2219 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 00:15:25.280970 kubelet[2219]: I0428 00:15:25.280414 2219 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 00:15:25.280970 kubelet[2219]: I0428 00:15:25.280457 2219 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 28 00:15:25.280970 kubelet[2219]: W0428 00:15:25.280500 2219 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 28 00:15:25.284316 kubelet[2219]: I0428 00:15:25.284298 2219 server.go:1262] "Started kubelet"
Apr 28 00:15:25.285772 kubelet[2219]: I0428 00:15:25.285727 2219 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 00:15:25.286716 kubelet[2219]: I0428 00:15:25.286685 2219 server.go:310] "Adding debug handlers to kubelet server"
Apr 28 00:15:25.288744 kubelet[2219]: I0428 00:15:25.288642 2219 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 00:15:25.288908 kubelet[2219]: I0428 00:15:25.288890 2219 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 28 00:15:25.289255 kubelet[2219]: I0428 00:15:25.289236 2219 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 00:15:25.290764 kubelet[2219]: E0428 00:15:25.289458 2219 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://128.140.91.51:6443/api/v1/namespaces/default/events\": dial tcp 128.140.91.51:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-n-d098215774.18aa5d0fc6527456 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-n-d098215774,UID:ci-4081-3-7-n-d098215774,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-n-d098215774,},FirstTimestamp:2026-04-28 00:15:25.284267094 +0000 UTC m=+0.694619663,LastTimestamp:2026-04-28 00:15:25.284267094 +0000 UTC m=+0.694619663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-n-d098215774,}"
Apr 28 00:15:25.292888 kubelet[2219]: I0428 00:15:25.292865 2219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 28 00:15:25.293407 kubelet[2219]: I0428 00:15:25.293387 2219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 00:15:25.296343 kubelet[2219]: E0428 00:15:25.296317 2219 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 00:15:25.297159 kubelet[2219]: E0428 00:15:25.297141 2219 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-d098215774\" not found"
Apr 28 00:15:25.297263 kubelet[2219]: I0428 00:15:25.297253 2219 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 28 00:15:25.297503 kubelet[2219]: I0428 00:15:25.297486 2219 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 28 00:15:25.297689 kubelet[2219]: I0428 00:15:25.297675 2219 reconciler.go:29] "Reconciler: start to sync state"
Apr 28 00:15:25.298645 kubelet[2219]: I0428 00:15:25.298622 2219 factory.go:223] Registration of the systemd container factory successfully
Apr 28 00:15:25.298873 kubelet[2219]: I0428 00:15:25.298843 2219 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 00:15:25.299486 kubelet[2219]: E0428 00:15:25.299461 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://128.140.91.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 28 00:15:25.300666 kubelet[2219]: I0428 00:15:25.300643 2219 factory.go:223] Registration of the containerd container factory successfully
Apr 28 00:15:25.307971 kubelet[2219]: E0428 00:15:25.307905 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.91.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-d098215774?timeout=10s\": dial tcp 128.140.91.51:6443: connect: connection refused" interval="200ms"
Apr 28 00:15:25.320814 kubelet[2219]: I0428 00:15:25.320730 2219 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 28 00:15:25.323419 kubelet[2219]: I0428 00:15:25.323383 2219 kubelet_network_linux.go:54] "Initialized iptables rules."
protocol="IPv6" Apr 28 00:15:25.323419 kubelet[2219]: I0428 00:15:25.323424 2219 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 00:15:25.323606 kubelet[2219]: I0428 00:15:25.323459 2219 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 00:15:25.323606 kubelet[2219]: E0428 00:15:25.323525 2219 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:15:25.332826 kubelet[2219]: E0428 00:15:25.331985 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://128.140.91.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 28 00:15:25.335298 kubelet[2219]: I0428 00:15:25.335278 2219 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:15:25.335559 kubelet[2219]: I0428 00:15:25.335544 2219 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:15:25.335961 kubelet[2219]: I0428 00:15:25.335939 2219 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:15:25.338500 kubelet[2219]: I0428 00:15:25.338482 2219 policy_none.go:49] "None policy: Start" Apr 28 00:15:25.338611 kubelet[2219]: I0428 00:15:25.338600 2219 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:15:25.338700 kubelet[2219]: I0428 00:15:25.338671 2219 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:15:25.340497 kubelet[2219]: I0428 00:15:25.340481 2219 policy_none.go:47] "Start" Apr 28 00:15:25.345964 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 28 00:15:25.359453 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 28 00:15:25.363494 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 28 00:15:25.374353 kubelet[2219]: E0428 00:15:25.374317 2219 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:15:25.374873 kubelet[2219]: I0428 00:15:25.374554 2219 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:15:25.374873 kubelet[2219]: I0428 00:15:25.374574 2219 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:15:25.374986 kubelet[2219]: I0428 00:15:25.374906 2219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:15:25.376434 kubelet[2219]: E0428 00:15:25.376406 2219 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:15:25.376537 kubelet[2219]: E0428 00:15:25.376466 2219 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-n-d098215774\" not found" Apr 28 00:15:25.442031 systemd[1]: Created slice kubepods-burstable-pod374fb25a9278e565a17773d2712d539e.slice - libcontainer container kubepods-burstable-pod374fb25a9278e565a17773d2712d539e.slice. Apr 28 00:15:25.452774 kubelet[2219]: E0428 00:15:25.452711 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:25.457666 systemd[1]: Created slice kubepods-burstable-podadbdba3950c52d07a4ff8e4ca7a68667.slice - libcontainer container kubepods-burstable-podadbdba3950c52d07a4ff8e4ca7a68667.slice. 
Apr 28 00:15:25.460927 kubelet[2219]: E0428 00:15:25.460645 2219 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://128.140.91.51:6443/api/v1/namespaces/default/events\": dial tcp 128.140.91.51:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-n-d098215774.18aa5d0fc6527456 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-n-d098215774,UID:ci-4081-3-7-n-d098215774,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-n-d098215774,},FirstTimestamp:2026-04-28 00:15:25.284267094 +0000 UTC m=+0.694619663,LastTimestamp:2026-04-28 00:15:25.284267094 +0000 UTC m=+0.694619663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-n-d098215774,}" Apr 28 00:15:25.461067 kubelet[2219]: E0428 00:15:25.461000 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:25.473094 systemd[1]: Created slice kubepods-burstable-pod74bf90254a294eb34d1e184038c4e7e9.slice - libcontainer container kubepods-burstable-pod74bf90254a294eb34d1e184038c4e7e9.slice. 
Apr 28 00:15:25.476217 kubelet[2219]: E0428 00:15:25.476165 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:25.477150 kubelet[2219]: I0428 00:15:25.477129 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:25.477753 kubelet[2219]: E0428 00:15:25.477690 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.91.51:6443/api/v1/nodes\": dial tcp 128.140.91.51:6443: connect: connection refused" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499386 kubelet[2219]: I0428 00:15:25.498560 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499386 kubelet[2219]: I0428 00:15:25.498643 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499386 kubelet[2219]: I0428 00:15:25.498702 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499386 
kubelet[2219]: I0428 00:15:25.498735 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499386 kubelet[2219]: I0428 00:15:25.498770 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499620 kubelet[2219]: I0428 00:15:25.498801 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74bf90254a294eb34d1e184038c4e7e9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-n-d098215774\" (UID: \"74bf90254a294eb34d1e184038c4e7e9\") " pod="kube-system/kube-scheduler-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499620 kubelet[2219]: I0428 00:15:25.499212 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/374fb25a9278e565a17773d2712d539e-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-n-d098215774\" (UID: \"374fb25a9278e565a17773d2712d539e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499620 kubelet[2219]: I0428 00:15:25.499246 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/374fb25a9278e565a17773d2712d539e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-n-d098215774\" 
(UID: \"374fb25a9278e565a17773d2712d539e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.499620 kubelet[2219]: I0428 00:15:25.499275 2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/374fb25a9278e565a17773d2712d539e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-n-d098215774\" (UID: \"374fb25a9278e565a17773d2712d539e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:25.509711 kubelet[2219]: E0428 00:15:25.509637 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.91.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-d098215774?timeout=10s\": dial tcp 128.140.91.51:6443: connect: connection refused" interval="400ms" Apr 28 00:15:25.680836 kubelet[2219]: I0428 00:15:25.680755 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:25.681560 kubelet[2219]: E0428 00:15:25.681199 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.91.51:6443/api/v1/nodes\": dial tcp 128.140.91.51:6443: connect: connection refused" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:25.759184 containerd[1482]: time="2026-04-28T00:15:25.759013928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-n-d098215774,Uid:374fb25a9278e565a17773d2712d539e,Namespace:kube-system,Attempt:0,}" Apr 28 00:15:25.765022 containerd[1482]: time="2026-04-28T00:15:25.764981820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-n-d098215774,Uid:adbdba3950c52d07a4ff8e4ca7a68667,Namespace:kube-system,Attempt:0,}" Apr 28 00:15:25.780833 containerd[1482]: time="2026-04-28T00:15:25.780487335Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-n-d098215774,Uid:74bf90254a294eb34d1e184038c4e7e9,Namespace:kube-system,Attempt:0,}" Apr 28 00:15:25.910356 kubelet[2219]: E0428 00:15:25.910284 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.91.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-d098215774?timeout=10s\": dial tcp 128.140.91.51:6443: connect: connection refused" interval="800ms" Apr 28 00:15:26.084070 kubelet[2219]: I0428 00:15:26.083842 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:26.084692 kubelet[2219]: E0428 00:15:26.084332 2219 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://128.140.91.51:6443/api/v1/nodes\": dial tcp 128.140.91.51:6443: connect: connection refused" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:26.139339 kubelet[2219]: E0428 00:15:26.139275 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://128.140.91.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 28 00:15:26.245938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860520061.mount: Deactivated successfully. 
Apr 28 00:15:26.255967 containerd[1482]: time="2026-04-28T00:15:26.255906837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:15:26.257472 containerd[1482]: time="2026-04-28T00:15:26.257423004Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:15:26.258988 containerd[1482]: time="2026-04-28T00:15:26.258949409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:15:26.259287 containerd[1482]: time="2026-04-28T00:15:26.259262052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:15:26.259428 containerd[1482]: time="2026-04-28T00:15:26.259400429Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:15:26.260872 containerd[1482]: time="2026-04-28T00:15:26.260819126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 28 00:15:26.262411 containerd[1482]: time="2026-04-28T00:15:26.262359500Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:15:26.267070 containerd[1482]: time="2026-04-28T00:15:26.267022494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:15:26.269488 
containerd[1482]: time="2026-04-28T00:15:26.268577962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 487.974437ms" Apr 28 00:15:26.271342 containerd[1482]: time="2026-04-28T00:15:26.271306044Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 506.242172ms" Apr 28 00:15:26.273324 kubelet[2219]: E0428 00:15:26.273296 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://128.140.91.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-n-d098215774&limit=500&resourceVersion=0\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 28 00:15:26.283813 containerd[1482]: time="2026-04-28T00:15:26.283770627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.623317ms" Apr 28 00:15:26.411952 containerd[1482]: time="2026-04-28T00:15:26.411729843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:15:26.411952 containerd[1482]: time="2026-04-28T00:15:26.411793351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:15:26.412684 containerd[1482]: time="2026-04-28T00:15:26.412570142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:26.413374 containerd[1482]: time="2026-04-28T00:15:26.412651394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:15:26.413708 containerd[1482]: time="2026-04-28T00:15:26.413671619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:15:26.413967 containerd[1482]: time="2026-04-28T00:15:26.413826736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:26.414548 containerd[1482]: time="2026-04-28T00:15:26.414507382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:26.414817 containerd[1482]: time="2026-04-28T00:15:26.414736405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:26.416833 containerd[1482]: time="2026-04-28T00:15:26.416670554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:15:26.416833 containerd[1482]: time="2026-04-28T00:15:26.416737354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:15:26.416833 containerd[1482]: time="2026-04-28T00:15:26.416754134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:26.418389 containerd[1482]: time="2026-04-28T00:15:26.418092663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:26.443511 systemd[1]: Started cri-containerd-e430df7aef99f81b0b6d53ebaf6243c18dc6a7f0dd4f217be27b23ec02b8113b.scope - libcontainer container e430df7aef99f81b0b6d53ebaf6243c18dc6a7f0dd4f217be27b23ec02b8113b. Apr 28 00:15:26.453929 systemd[1]: Started cri-containerd-2e838160af6857ea93aad67ff7af815ee9eb2d0ffd073b15e449b1cb6b38d358.scope - libcontainer container 2e838160af6857ea93aad67ff7af815ee9eb2d0ffd073b15e449b1cb6b38d358. Apr 28 00:15:26.458698 systemd[1]: Started cri-containerd-3f4227c7e5dcf2f5345faba3676466d64d0627a13a3ad5151fa16236cbad2614.scope - libcontainer container 3f4227c7e5dcf2f5345faba3676466d64d0627a13a3ad5151fa16236cbad2614. Apr 28 00:15:26.516526 containerd[1482]: time="2026-04-28T00:15:26.516308697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-n-d098215774,Uid:374fb25a9278e565a17773d2712d539e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f4227c7e5dcf2f5345faba3676466d64d0627a13a3ad5151fa16236cbad2614\"" Apr 28 00:15:26.517013 containerd[1482]: time="2026-04-28T00:15:26.516876457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-n-d098215774,Uid:adbdba3950c52d07a4ff8e4ca7a68667,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e838160af6857ea93aad67ff7af815ee9eb2d0ffd073b15e449b1cb6b38d358\"" Apr 28 00:15:26.523884 containerd[1482]: time="2026-04-28T00:15:26.523671068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-n-d098215774,Uid:74bf90254a294eb34d1e184038c4e7e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e430df7aef99f81b0b6d53ebaf6243c18dc6a7f0dd4f217be27b23ec02b8113b\"" Apr 28 00:15:26.540565 containerd[1482]: 
time="2026-04-28T00:15:26.540469502Z" level=info msg="CreateContainer within sandbox \"2e838160af6857ea93aad67ff7af815ee9eb2d0ffd073b15e449b1cb6b38d358\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 00:15:26.556007 containerd[1482]: time="2026-04-28T00:15:26.555455585Z" level=info msg="CreateContainer within sandbox \"3f4227c7e5dcf2f5345faba3676466d64d0627a13a3ad5151fa16236cbad2614\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 00:15:26.563111 containerd[1482]: time="2026-04-28T00:15:26.562963680Z" level=info msg="CreateContainer within sandbox \"e430df7aef99f81b0b6d53ebaf6243c18dc6a7f0dd4f217be27b23ec02b8113b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 00:15:26.588153 containerd[1482]: time="2026-04-28T00:15:26.588093086Z" level=info msg="CreateContainer within sandbox \"2e838160af6857ea93aad67ff7af815ee9eb2d0ffd073b15e449b1cb6b38d358\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6\"" Apr 28 00:15:26.589903 containerd[1482]: time="2026-04-28T00:15:26.588931940Z" level=info msg="StartContainer for \"0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6\"" Apr 28 00:15:26.602920 containerd[1482]: time="2026-04-28T00:15:26.602435335Z" level=info msg="CreateContainer within sandbox \"3f4227c7e5dcf2f5345faba3676466d64d0627a13a3ad5151fa16236cbad2614\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5fddb0dcd3889d931cb984b9e9cee6f897a2d0ad7535dac2e3c13b76452f9eb6\"" Apr 28 00:15:26.603186 containerd[1482]: time="2026-04-28T00:15:26.603104780Z" level=info msg="StartContainer for \"5fddb0dcd3889d931cb984b9e9cee6f897a2d0ad7535dac2e3c13b76452f9eb6\"" Apr 28 00:15:26.605891 containerd[1482]: time="2026-04-28T00:15:26.605009664Z" level=info msg="CreateContainer within sandbox 
\"e430df7aef99f81b0b6d53ebaf6243c18dc6a7f0dd4f217be27b23ec02b8113b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3649af46219a6113ea1f166d621c70be7611edf8e4b71490fb9b37d259866021\"" Apr 28 00:15:26.606377 containerd[1482]: time="2026-04-28T00:15:26.606346869Z" level=info msg="StartContainer for \"3649af46219a6113ea1f166d621c70be7611edf8e4b71490fb9b37d259866021\"" Apr 28 00:15:26.623068 systemd[1]: Started cri-containerd-0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6.scope - libcontainer container 0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6. Apr 28 00:15:26.648079 systemd[1]: Started cri-containerd-5fddb0dcd3889d931cb984b9e9cee6f897a2d0ad7535dac2e3c13b76452f9eb6.scope - libcontainer container 5fddb0dcd3889d931cb984b9e9cee6f897a2d0ad7535dac2e3c13b76452f9eb6. Apr 28 00:15:26.656047 systemd[1]: Started cri-containerd-3649af46219a6113ea1f166d621c70be7611edf8e4b71490fb9b37d259866021.scope - libcontainer container 3649af46219a6113ea1f166d621c70be7611edf8e4b71490fb9b37d259866021. 
Apr 28 00:15:26.684107 containerd[1482]: time="2026-04-28T00:15:26.683157717Z" level=info msg="StartContainer for \"0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6\" returns successfully" Apr 28 00:15:26.706765 kubelet[2219]: E0428 00:15:26.706719 2219 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://128.140.91.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.91.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 28 00:15:26.713958 kubelet[2219]: E0428 00:15:26.713905 2219 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.91.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-d098215774?timeout=10s\": dial tcp 128.140.91.51:6443: connect: connection refused" interval="1.6s" Apr 28 00:15:26.751950 containerd[1482]: time="2026-04-28T00:15:26.751228082Z" level=info msg="StartContainer for \"5fddb0dcd3889d931cb984b9e9cee6f897a2d0ad7535dac2e3c13b76452f9eb6\" returns successfully" Apr 28 00:15:26.768904 containerd[1482]: time="2026-04-28T00:15:26.768559913Z" level=info msg="StartContainer for \"3649af46219a6113ea1f166d621c70be7611edf8e4b71490fb9b37d259866021\" returns successfully" Apr 28 00:15:26.887337 kubelet[2219]: I0428 00:15:26.887305 2219 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:27.355063 kubelet[2219]: E0428 00:15:27.355016 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:27.370886 kubelet[2219]: E0428 00:15:27.368408 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 
00:15:27.376048 kubelet[2219]: E0428 00:15:27.376013 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:28.379646 kubelet[2219]: E0428 00:15:28.379607 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:28.380036 kubelet[2219]: E0428 00:15:28.379918 2219 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:28.749995 kubelet[2219]: E0428 00:15:28.749879 2219 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-n-d098215774\" not found" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:28.841269 kubelet[2219]: I0428 00:15:28.841215 2219 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:28.902895 kubelet[2219]: I0428 00:15:28.902831 2219 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-n-d098215774" Apr 28 00:15:28.918895 kubelet[2219]: E0428 00:15:28.918842 2219 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-n-d098215774\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-n-d098215774" Apr 28 00:15:28.918895 kubelet[2219]: I0428 00:15:28.918886 2219 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:28.924880 kubelet[2219]: E0428 00:15:28.923718 2219 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-n-d098215774\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:28.924880 kubelet[2219]: I0428 00:15:28.923751 2219 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:28.927479 kubelet[2219]: E0428 00:15:28.927427 2219 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-n-d098215774\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:29.281044 kubelet[2219]: I0428 00:15:29.280954 2219 apiserver.go:52] "Watching apiserver" Apr 28 00:15:29.297794 kubelet[2219]: I0428 00:15:29.297683 2219 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:15:30.227449 systemd[1]: Started sshd@8-128.140.91.51:22-223.233.87.33:19777.service - OpenSSH per-connection server daemon (223.233.87.33:19777). Apr 28 00:15:30.782564 kubelet[2219]: I0428 00:15:30.782221 2219 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.054365 systemd[1]: Reloading requested from client PID 2502 ('systemctl') (unit session-7.scope)... Apr 28 00:15:31.054382 systemd[1]: Reloading... Apr 28 00:15:31.105337 sshd[2498]: Received disconnect from 223.233.87.33 port 19777:11: Bye Bye [preauth] Apr 28 00:15:31.105337 sshd[2498]: Disconnected from authenticating user root 223.233.87.33 port 19777 [preauth] Apr 28 00:15:31.167890 zram_generator::config[2542]: No configuration found. Apr 28 00:15:31.287515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:15:31.375005 systemd[1]: Reloading finished in 319 ms. 
Apr 28 00:15:31.404314 systemd[1]: sshd@8-128.140.91.51:22-223.233.87.33:19777.service: Deactivated successfully. Apr 28 00:15:31.418723 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:15:31.438730 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:15:31.439149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:15:31.439303 systemd[1]: kubelet.service: Consumed 1.111s CPU time, 121.1M memory peak, 0B memory swap peak. Apr 28 00:15:31.446589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:15:31.602036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:15:31.609282 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:15:31.667732 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 28 00:15:31.667732 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 28 00:15:31.668336 kubelet[2593]: I0428 00:15:31.667375 2593 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 28 00:15:31.675233 kubelet[2593]: I0428 00:15:31.675204 2593 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 28 00:15:31.675671 kubelet[2593]: I0428 00:15:31.675362 2593 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:15:31.675671 kubelet[2593]: I0428 00:15:31.675396 2593 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:15:31.675671 kubelet[2593]: I0428 00:15:31.675402 2593 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 00:15:31.676661 kubelet[2593]: I0428 00:15:31.676551 2593 server.go:956] "Client rotation is on, will bootstrap in background" Apr 28 00:15:31.681866 kubelet[2593]: I0428 00:15:31.681818 2593 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 00:15:31.686350 kubelet[2593]: I0428 00:15:31.686322 2593 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:15:31.691662 kubelet[2593]: E0428 00:15:31.691532 2593 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 00:15:31.691662 kubelet[2593]: I0428 00:15:31.691608 2593 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 28 00:15:31.694385 kubelet[2593]: I0428 00:15:31.694362 2593 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 00:15:31.694604 kubelet[2593]: I0428 00:15:31.694578 2593 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:15:31.694784 kubelet[2593]: I0428 00:15:31.694606 2593 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-n-d098215774","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:15:31.694881 kubelet[2593]: I0428 00:15:31.694786 2593 topology_manager.go:138] "Creating topology manager with none policy" Apr 28 
00:15:31.694881 kubelet[2593]: I0428 00:15:31.694798 2593 container_manager_linux.go:306] "Creating device plugin manager" Apr 28 00:15:31.694881 kubelet[2593]: I0428 00:15:31.694825 2593 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:15:31.695063 kubelet[2593]: I0428 00:15:31.695050 2593 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:15:31.695244 kubelet[2593]: I0428 00:15:31.695233 2593 kubelet.go:475] "Attempting to sync node with API server" Apr 28 00:15:31.695296 kubelet[2593]: I0428 00:15:31.695255 2593 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:15:31.695296 kubelet[2593]: I0428 00:15:31.695283 2593 kubelet.go:387] "Adding apiserver pod source" Apr 28 00:15:31.695357 kubelet[2593]: I0428 00:15:31.695313 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:15:31.697134 kubelet[2593]: I0428 00:15:31.697106 2593 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 00:15:31.697919 kubelet[2593]: I0428 00:15:31.697812 2593 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:15:31.697976 kubelet[2593]: I0428 00:15:31.697846 2593 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:15:31.700090 kubelet[2593]: I0428 00:15:31.700066 2593 server.go:1262] "Started kubelet" Apr 28 00:15:31.704993 kubelet[2593]: I0428 00:15:31.704948 2593 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:15:31.705134 kubelet[2593]: I0428 00:15:31.705121 2593 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:15:31.705463 kubelet[2593]: I0428 00:15:31.705430 2593 
server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:15:31.705572 kubelet[2593]: I0428 00:15:31.705551 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 28 00:15:31.705867 kubelet[2593]: I0428 00:15:31.705709 2593 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:15:31.708982 kubelet[2593]: I0428 00:15:31.708960 2593 server.go:310] "Adding debug handlers to kubelet server" Apr 28 00:15:31.713959 kubelet[2593]: I0428 00:15:31.713089 2593 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:15:31.723645 kubelet[2593]: I0428 00:15:31.723027 2593 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 28 00:15:31.723645 kubelet[2593]: E0428 00:15:31.723245 2593 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-d098215774\" not found" Apr 28 00:15:31.725907 kubelet[2593]: I0428 00:15:31.724587 2593 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:15:31.725907 kubelet[2593]: I0428 00:15:31.724751 2593 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:15:31.742815 kubelet[2593]: I0428 00:15:31.742713 2593 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:15:31.746578 kubelet[2593]: I0428 00:15:31.746465 2593 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:15:31.746578 kubelet[2593]: I0428 00:15:31.746501 2593 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:15:31.751367 kubelet[2593]: I0428 00:15:31.751337 2593 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 28 00:15:31.753412 kubelet[2593]: I0428 00:15:31.752681 2593 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 00:15:31.753412 kubelet[2593]: I0428 00:15:31.752706 2593 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 28 00:15:31.753412 kubelet[2593]: I0428 00:15:31.752823 2593 kubelet.go:2428] "Starting kubelet main sync loop" Apr 28 00:15:31.753412 kubelet[2593]: E0428 00:15:31.752915 2593 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:15:31.760968 kubelet[2593]: E0428 00:15:31.760943 2593 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:15:31.797328 kubelet[2593]: I0428 00:15:31.797301 2593 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 28 00:15:31.797478 kubelet[2593]: I0428 00:15:31.797465 2593 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 28 00:15:31.797564 kubelet[2593]: I0428 00:15:31.797530 2593 state_mem.go:36] "Initialized new in-memory state store" Apr 28 00:15:31.797791 kubelet[2593]: I0428 00:15:31.797777 2593 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 28 00:15:31.797948 kubelet[2593]: I0428 00:15:31.797921 2593 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 28 00:15:31.798021 kubelet[2593]: I0428 00:15:31.798013 2593 policy_none.go:49] "None policy: Start" Apr 28 00:15:31.798076 kubelet[2593]: I0428 00:15:31.798067 2593 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:15:31.798131 kubelet[2593]: I0428 00:15:31.798121 2593 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:15:31.798616 kubelet[2593]: I0428 00:15:31.798590 2593 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Apr 28 00:15:31.798695 kubelet[2593]: I0428 00:15:31.798687 2593 policy_none.go:47] "Start" Apr 28 00:15:31.806526 kubelet[2593]: E0428 00:15:31.806503 2593 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:15:31.806674 kubelet[2593]: I0428 00:15:31.806662 2593 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 28 00:15:31.806716 kubelet[2593]: I0428 00:15:31.806677 2593 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:15:31.807983 kubelet[2593]: I0428 00:15:31.807963 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 28 00:15:31.811093 kubelet[2593]: E0428 00:15:31.810681 2593 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:15:31.855479 kubelet[2593]: I0428 00:15:31.854598 2593 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.856894 kubelet[2593]: I0428 00:15:31.855188 2593 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.856894 kubelet[2593]: I0428 00:15:31.855458 2593 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.867165 kubelet[2593]: E0428 00:15:31.867114 2593 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-n-d098215774\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.910128 kubelet[2593]: I0428 00:15:31.910091 2593 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:31.920157 kubelet[2593]: I0428 00:15:31.919729 2593 kubelet_node_status.go:124] "Node was previously 
registered" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:31.921108 kubelet[2593]: I0428 00:15:31.920772 2593 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925567 kubelet[2593]: I0428 00:15:31.925419 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/374fb25a9278e565a17773d2712d539e-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-n-d098215774\" (UID: \"374fb25a9278e565a17773d2712d539e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925567 kubelet[2593]: I0428 00:15:31.925454 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925567 kubelet[2593]: I0428 00:15:31.925474 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925567 kubelet[2593]: I0428 00:15:31.925511 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/374fb25a9278e565a17773d2712d539e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-n-d098215774\" (UID: \"374fb25a9278e565a17773d2712d539e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925567 kubelet[2593]: I0428 00:15:31.925562 2593 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/374fb25a9278e565a17773d2712d539e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-n-d098215774\" (UID: \"374fb25a9278e565a17773d2712d539e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925780 kubelet[2593]: I0428 00:15:31.925599 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925780 kubelet[2593]: I0428 00:15:31.925614 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925780 kubelet[2593]: I0428 00:15:31.925639 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adbdba3950c52d07a4ff8e4ca7a68667-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-n-d098215774\" (UID: \"adbdba3950c52d07a4ff8e4ca7a68667\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" Apr 28 00:15:31.925780 kubelet[2593]: I0428 00:15:31.925656 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74bf90254a294eb34d1e184038c4e7e9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-n-d098215774\" (UID: 
\"74bf90254a294eb34d1e184038c4e7e9\") " pod="kube-system/kube-scheduler-ci-4081-3-7-n-d098215774" Apr 28 00:15:32.696538 kubelet[2593]: I0428 00:15:32.696076 2593 apiserver.go:52] "Watching apiserver" Apr 28 00:15:32.725380 kubelet[2593]: I0428 00:15:32.725260 2593 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:15:32.780064 kubelet[2593]: I0428 00:15:32.780033 2593 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:32.787642 kubelet[2593]: E0428 00:15:32.787509 2593 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-n-d098215774\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" Apr 28 00:15:32.804832 kubelet[2593]: I0428 00:15:32.804461 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-n-d098215774" podStartSLOduration=1.804446107 podStartE2EDuration="1.804446107s" podCreationTimestamp="2026-04-28 00:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:15:32.80397707 +0000 UTC m=+1.188768875" watchObservedRunningTime="2026-04-28 00:15:32.804446107 +0000 UTC m=+1.189237912" Apr 28 00:15:32.839557 kubelet[2593]: I0428 00:15:32.839263 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-n-d098215774" podStartSLOduration=2.83924477 podStartE2EDuration="2.83924477s" podCreationTimestamp="2026-04-28 00:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:15:32.819113405 +0000 UTC m=+1.203905210" watchObservedRunningTime="2026-04-28 00:15:32.83924477 +0000 UTC m=+1.224036575" Apr 28 00:15:32.858530 kubelet[2593]: I0428 00:15:32.858289 2593 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-d098215774" podStartSLOduration=1.8582730330000001 podStartE2EDuration="1.858273033s" podCreationTimestamp="2026-04-28 00:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:15:32.840009846 +0000 UTC m=+1.224801691" watchObservedRunningTime="2026-04-28 00:15:32.858273033 +0000 UTC m=+1.243064838" Apr 28 00:15:35.894192 kubelet[2593]: I0428 00:15:35.894119 2593 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 00:15:35.894922 containerd[1482]: time="2026-04-28T00:15:35.894826891Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 28 00:15:35.895608 kubelet[2593]: I0428 00:15:35.895334 2593 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 00:15:36.148351 systemd[1]: Created slice kubepods-besteffort-pod752bab89_7e89_4ee0_8d2a_2858e4080269.slice - libcontainer container kubepods-besteffort-pod752bab89_7e89_4ee0_8d2a_2858e4080269.slice. 
Apr 28 00:15:36.154733 kubelet[2593]: I0428 00:15:36.154532 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/752bab89-7e89-4ee0-8d2a-2858e4080269-kube-proxy\") pod \"kube-proxy-7rnxg\" (UID: \"752bab89-7e89-4ee0-8d2a-2858e4080269\") " pod="kube-system/kube-proxy-7rnxg" Apr 28 00:15:36.154733 kubelet[2593]: I0428 00:15:36.154575 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/752bab89-7e89-4ee0-8d2a-2858e4080269-lib-modules\") pod \"kube-proxy-7rnxg\" (UID: \"752bab89-7e89-4ee0-8d2a-2858e4080269\") " pod="kube-system/kube-proxy-7rnxg" Apr 28 00:15:36.154733 kubelet[2593]: I0428 00:15:36.154596 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/752bab89-7e89-4ee0-8d2a-2858e4080269-xtables-lock\") pod \"kube-proxy-7rnxg\" (UID: \"752bab89-7e89-4ee0-8d2a-2858e4080269\") " pod="kube-system/kube-proxy-7rnxg" Apr 28 00:15:36.154733 kubelet[2593]: I0428 00:15:36.154613 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658x5\" (UniqueName: \"kubernetes.io/projected/752bab89-7e89-4ee0-8d2a-2858e4080269-kube-api-access-658x5\") pod \"kube-proxy-7rnxg\" (UID: \"752bab89-7e89-4ee0-8d2a-2858e4080269\") " pod="kube-system/kube-proxy-7rnxg" Apr 28 00:15:36.264874 kubelet[2593]: E0428 00:15:36.264723 2593 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 28 00:15:36.264874 kubelet[2593]: E0428 00:15:36.264769 2593 projected.go:196] Error preparing data for projected volume kube-api-access-658x5 for pod kube-system/kube-proxy-7rnxg: configmap "kube-root-ca.crt" not found Apr 28 00:15:36.265209 kubelet[2593]: E0428 00:15:36.265059 2593 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/752bab89-7e89-4ee0-8d2a-2858e4080269-kube-api-access-658x5 podName:752bab89-7e89-4ee0-8d2a-2858e4080269 nodeName:}" failed. No retries permitted until 2026-04-28 00:15:36.764818967 +0000 UTC m=+5.149610772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-658x5" (UniqueName: "kubernetes.io/projected/752bab89-7e89-4ee0-8d2a-2858e4080269-kube-api-access-658x5") pod "kube-proxy-7rnxg" (UID: "752bab89-7e89-4ee0-8d2a-2858e4080269") : configmap "kube-root-ca.crt" not found Apr 28 00:15:36.860884 kubelet[2593]: E0428 00:15:36.859239 2593 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 28 00:15:36.860884 kubelet[2593]: E0428 00:15:36.859282 2593 projected.go:196] Error preparing data for projected volume kube-api-access-658x5 for pod kube-system/kube-proxy-7rnxg: configmap "kube-root-ca.crt" not found Apr 28 00:15:36.860884 kubelet[2593]: E0428 00:15:36.859351 2593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/752bab89-7e89-4ee0-8d2a-2858e4080269-kube-api-access-658x5 podName:752bab89-7e89-4ee0-8d2a-2858e4080269 nodeName:}" failed. No retries permitted until 2026-04-28 00:15:37.859330263 +0000 UTC m=+6.244122068 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-658x5" (UniqueName: "kubernetes.io/projected/752bab89-7e89-4ee0-8d2a-2858e4080269-kube-api-access-658x5") pod "kube-proxy-7rnxg" (UID: "752bab89-7e89-4ee0-8d2a-2858e4080269") : configmap "kube-root-ca.crt" not found Apr 28 00:15:37.170479 systemd[1]: Created slice kubepods-besteffort-pod48bac297_796c_4073_994d_cb9566669ab0.slice - libcontainer container kubepods-besteffort-pod48bac297_796c_4073_994d_cb9566669ab0.slice. 
Apr 28 00:15:37.260972 kubelet[2593]: I0428 00:15:37.260791 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48bac297-796c-4073-994d-cb9566669ab0-var-lib-calico\") pod \"tigera-operator-6fb8d665dd-f4sp7\" (UID: \"48bac297-796c-4073-994d-cb9566669ab0\") " pod="tigera-operator/tigera-operator-6fb8d665dd-f4sp7" Apr 28 00:15:37.260972 kubelet[2593]: I0428 00:15:37.260897 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7sp\" (UniqueName: \"kubernetes.io/projected/48bac297-796c-4073-994d-cb9566669ab0-kube-api-access-rc7sp\") pod \"tigera-operator-6fb8d665dd-f4sp7\" (UID: \"48bac297-796c-4073-994d-cb9566669ab0\") " pod="tigera-operator/tigera-operator-6fb8d665dd-f4sp7" Apr 28 00:15:37.479956 containerd[1482]: time="2026-04-28T00:15:37.479774731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-f4sp7,Uid:48bac297-796c-4073-994d-cb9566669ab0,Namespace:tigera-operator,Attempt:0,}" Apr 28 00:15:37.507134 containerd[1482]: time="2026-04-28T00:15:37.506789495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:15:37.507528 containerd[1482]: time="2026-04-28T00:15:37.507153036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:15:37.507528 containerd[1482]: time="2026-04-28T00:15:37.507172412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:37.507528 containerd[1482]: time="2026-04-28T00:15:37.507323217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:37.535301 systemd[1]: Started cri-containerd-cc64ae7e95c28b766c2030b9c594b29f9b3a61afda6d7882bd8cf35fc00ec7c9.scope - libcontainer container cc64ae7e95c28b766c2030b9c594b29f9b3a61afda6d7882bd8cf35fc00ec7c9. Apr 28 00:15:37.571108 containerd[1482]: time="2026-04-28T00:15:37.570814393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-f4sp7,Uid:48bac297-796c-4073-994d-cb9566669ab0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cc64ae7e95c28b766c2030b9c594b29f9b3a61afda6d7882bd8cf35fc00ec7c9\"" Apr 28 00:15:37.573429 containerd[1482]: time="2026-04-28T00:15:37.573058934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 28 00:15:37.959905 containerd[1482]: time="2026-04-28T00:15:37.959837502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rnxg,Uid:752bab89-7e89-4ee0-8d2a-2858e4080269,Namespace:kube-system,Attempt:0,}" Apr 28 00:15:37.988131 containerd[1482]: time="2026-04-28T00:15:37.987918270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:15:37.988131 containerd[1482]: time="2026-04-28T00:15:37.987983604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:15:37.988131 containerd[1482]: time="2026-04-28T00:15:37.988042133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:37.988629 containerd[1482]: time="2026-04-28T00:15:37.988193818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:38.007116 systemd[1]: Started cri-containerd-872f46eabf97bc2efffff068e6d4ab2e454af243c5ca44d7786c284c35caa208.scope - libcontainer container 872f46eabf97bc2efffff068e6d4ab2e454af243c5ca44d7786c284c35caa208. Apr 28 00:15:38.034314 containerd[1482]: time="2026-04-28T00:15:38.034253919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rnxg,Uid:752bab89-7e89-4ee0-8d2a-2858e4080269,Namespace:kube-system,Attempt:0,} returns sandbox id \"872f46eabf97bc2efffff068e6d4ab2e454af243c5ca44d7786c284c35caa208\"" Apr 28 00:15:38.042064 containerd[1482]: time="2026-04-28T00:15:38.041996698Z" level=info msg="CreateContainer within sandbox \"872f46eabf97bc2efffff068e6d4ab2e454af243c5ca44d7786c284c35caa208\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 00:15:38.058592 containerd[1482]: time="2026-04-28T00:15:38.058382310Z" level=info msg="CreateContainer within sandbox \"872f46eabf97bc2efffff068e6d4ab2e454af243c5ca44d7786c284c35caa208\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a7cc55ffd774637ff3211666ddc61ea9d366f0f753d8c941fd04f1611f9d86e\"" Apr 28 00:15:38.060033 containerd[1482]: time="2026-04-28T00:15:38.059871591Z" level=info msg="StartContainer for \"7a7cc55ffd774637ff3211666ddc61ea9d366f0f753d8c941fd04f1611f9d86e\"" Apr 28 00:15:38.086111 systemd[1]: Started cri-containerd-7a7cc55ffd774637ff3211666ddc61ea9d366f0f753d8c941fd04f1611f9d86e.scope - libcontainer container 7a7cc55ffd774637ff3211666ddc61ea9d366f0f753d8c941fd04f1611f9d86e. Apr 28 00:15:38.120716 containerd[1482]: time="2026-04-28T00:15:38.120586654Z" level=info msg="StartContainer for \"7a7cc55ffd774637ff3211666ddc61ea9d366f0f753d8c941fd04f1611f9d86e\" returns successfully" Apr 28 00:15:39.265200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659628538.mount: Deactivated successfully. 
Apr 28 00:15:39.879390 containerd[1482]: time="2026-04-28T00:15:39.879314002Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:39.881927 containerd[1482]: time="2026-04-28T00:15:39.881667777Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=24868969" Apr 28 00:15:39.883204 containerd[1482]: time="2026-04-28T00:15:39.883097645Z" level=info msg="ImageCreate event name:\"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:39.886411 containerd[1482]: time="2026-04-28T00:15:39.886369763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:39.887663 containerd[1482]: time="2026-04-28T00:15:39.887399257Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"24864964\" in 2.314277713s" Apr 28 00:15:39.887663 containerd[1482]: time="2026-04-28T00:15:39.887451170Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\"" Apr 28 00:15:39.895778 containerd[1482]: time="2026-04-28T00:15:39.895532902Z" level=info msg="CreateContainer within sandbox \"cc64ae7e95c28b766c2030b9c594b29f9b3a61afda6d7882bd8cf35fc00ec7c9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 28 00:15:39.924055 containerd[1482]: time="2026-04-28T00:15:39.924004945Z" level=info msg="CreateContainer within sandbox 
\"cc64ae7e95c28b766c2030b9c594b29f9b3a61afda6d7882bd8cf35fc00ec7c9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000\"" Apr 28 00:15:39.925072 containerd[1482]: time="2026-04-28T00:15:39.925030796Z" level=info msg="StartContainer for \"ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000\"" Apr 28 00:15:39.963531 systemd[1]: Started cri-containerd-ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000.scope - libcontainer container ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000. Apr 28 00:15:39.992790 containerd[1482]: time="2026-04-28T00:15:39.992731471Z" level=info msg="StartContainer for \"ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000\" returns successfully" Apr 28 00:15:40.815577 kubelet[2593]: I0428 00:15:40.815470 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7rnxg" podStartSLOduration=4.815433923 podStartE2EDuration="4.815433923s" podCreationTimestamp="2026-04-28 00:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:15:38.809038972 +0000 UTC m=+7.193830777" watchObservedRunningTime="2026-04-28 00:15:40.815433923 +0000 UTC m=+9.200225728" Apr 28 00:15:40.816240 kubelet[2593]: I0428 00:15:40.815672 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6fb8d665dd-f4sp7" podStartSLOduration=1.498910826 podStartE2EDuration="3.815661009s" podCreationTimestamp="2026-04-28 00:15:37 +0000 UTC" firstStartedPulling="2026-04-28 00:15:37.572597151 +0000 UTC m=+5.957388916" lastFinishedPulling="2026-04-28 00:15:39.889347334 +0000 UTC m=+8.274139099" observedRunningTime="2026-04-28 00:15:40.81526667 +0000 UTC m=+9.200058475" watchObservedRunningTime="2026-04-28 00:15:40.815661009 +0000 UTC m=+9.200452854" Apr 
28 00:15:46.164757 sudo[1702]: pam_unix(sudo:session): session closed for user root Apr 28 00:15:46.184568 sshd[1699]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:46.190318 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Apr 28 00:15:46.190486 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 00:15:46.190665 systemd[1]: session-7.scope: Consumed 6.741s CPU time, 152.4M memory peak, 0B memory swap peak. Apr 28 00:15:46.191818 systemd[1]: sshd@7-128.140.91.51:22-50.85.169.122:57464.service: Deactivated successfully. Apr 28 00:15:46.202323 systemd-logind[1460]: Removed session 7. Apr 28 00:15:51.001890 systemd[1]: Created slice kubepods-besteffort-pod66f3f639_8485_49d2_9cdf_5eda5ca565e9.slice - libcontainer container kubepods-besteffort-pod66f3f639_8485_49d2_9cdf_5eda5ca565e9.slice. Apr 28 00:15:51.057125 kubelet[2593]: I0428 00:15:51.057059 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66f3f639-8485-49d2-9cdf-5eda5ca565e9-typha-certs\") pod \"calico-typha-598d6f7fcc-zgnwk\" (UID: \"66f3f639-8485-49d2-9cdf-5eda5ca565e9\") " pod="calico-system/calico-typha-598d6f7fcc-zgnwk" Apr 28 00:15:51.057125 kubelet[2593]: I0428 00:15:51.057123 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f3f639-8485-49d2-9cdf-5eda5ca565e9-tigera-ca-bundle\") pod \"calico-typha-598d6f7fcc-zgnwk\" (UID: \"66f3f639-8485-49d2-9cdf-5eda5ca565e9\") " pod="calico-system/calico-typha-598d6f7fcc-zgnwk" Apr 28 00:15:51.057575 kubelet[2593]: I0428 00:15:51.057151 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckshd\" (UniqueName: \"kubernetes.io/projected/66f3f639-8485-49d2-9cdf-5eda5ca565e9-kube-api-access-ckshd\") pod \"calico-typha-598d6f7fcc-zgnwk\" (UID: 
\"66f3f639-8485-49d2-9cdf-5eda5ca565e9\") " pod="calico-system/calico-typha-598d6f7fcc-zgnwk" Apr 28 00:15:51.162224 systemd[1]: Created slice kubepods-besteffort-pod882a6cc6_ea1a_40b9_85d2_e2ba98a262ae.slice - libcontainer container kubepods-besteffort-pod882a6cc6_ea1a_40b9_85d2_e2ba98a262ae.slice. Apr 28 00:15:51.259367 kubelet[2593]: I0428 00:15:51.258070 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-flexvol-driver-host\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259367 kubelet[2593]: I0428 00:15:51.258111 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-sys-fs\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259367 kubelet[2593]: I0428 00:15:51.258132 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-bpffs\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259367 kubelet[2593]: I0428 00:15:51.258148 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-cni-bin-dir\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259367 kubelet[2593]: I0428 00:15:51.258168 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-cni-log-dir\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259574 kubelet[2593]: I0428 00:15:51.258233 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-policysync\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259574 kubelet[2593]: I0428 00:15:51.258251 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-var-lib-calico\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259574 kubelet[2593]: I0428 00:15:51.258269 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65r5\" (UniqueName: \"kubernetes.io/projected/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-kube-api-access-q65r5\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259574 kubelet[2593]: I0428 00:15:51.258285 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-cni-net-dir\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259574 kubelet[2593]: I0428 00:15:51.258311 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: 
\"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-nodeproc\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259688 kubelet[2593]: I0428 00:15:51.258327 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-xtables-lock\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259688 kubelet[2593]: I0428 00:15:51.258345 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-var-run-calico\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259688 kubelet[2593]: I0428 00:15:51.258358 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-node-certs\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259688 kubelet[2593]: I0428 00:15:51.258372 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-lib-modules\") pod \"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.259688 kubelet[2593]: I0428 00:15:51.258388 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882a6cc6-ea1a-40b9-85d2-e2ba98a262ae-tigera-ca-bundle\") pod 
\"calico-node-mklj8\" (UID: \"882a6cc6-ea1a-40b9-85d2-e2ba98a262ae\") " pod="calico-system/calico-node-mklj8" Apr 28 00:15:51.271695 kubelet[2593]: E0428 00:15:51.270352 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3" Apr 28 00:15:51.312753 containerd[1482]: time="2026-04-28T00:15:51.312074650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598d6f7fcc-zgnwk,Uid:66f3f639-8485-49d2-9cdf-5eda5ca565e9,Namespace:calico-system,Attempt:0,}" Apr 28 00:15:51.359467 kubelet[2593]: I0428 00:15:51.359421 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3-socket-dir\") pod \"csi-node-driver-w9gwt\" (UID: \"de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3\") " pod="calico-system/csi-node-driver-w9gwt" Apr 28 00:15:51.360774 kubelet[2593]: I0428 00:15:51.360636 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3-registration-dir\") pod \"csi-node-driver-w9gwt\" (UID: \"de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3\") " pod="calico-system/csi-node-driver-w9gwt" Apr 28 00:15:51.362010 kubelet[2593]: I0428 00:15:51.361986 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lf24\" (UniqueName: \"kubernetes.io/projected/de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3-kube-api-access-2lf24\") pod \"csi-node-driver-w9gwt\" (UID: \"de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3\") " pod="calico-system/csi-node-driver-w9gwt" Apr 28 00:15:51.362345 containerd[1482]: 
time="2026-04-28T00:15:51.360045096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:15:51.362345 containerd[1482]: time="2026-04-28T00:15:51.360103318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:15:51.362345 containerd[1482]: time="2026-04-28T00:15:51.360129768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:51.363519 kubelet[2593]: I0428 00:15:51.362801 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3-kubelet-dir\") pod \"csi-node-driver-w9gwt\" (UID: \"de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3\") " pod="calico-system/csi-node-driver-w9gwt" Apr 28 00:15:51.363519 kubelet[2593]: I0428 00:15:51.362842 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3-varrun\") pod \"csi-node-driver-w9gwt\" (UID: \"de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3\") " pod="calico-system/csi-node-driver-w9gwt" Apr 28 00:15:51.366030 containerd[1482]: time="2026-04-28T00:15:51.365609033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:51.366831 kubelet[2593]: E0428 00:15:51.366717 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.368950 kubelet[2593]: W0428 00:15:51.368315 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.368950 kubelet[2593]: E0428 00:15:51.368361 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.380118 kubelet[2593]: E0428 00:15:51.378786 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.380118 kubelet[2593]: W0428 00:15:51.379344 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.380592 kubelet[2593]: E0428 00:15:51.379375 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.396477 kubelet[2593]: E0428 00:15:51.396347 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.396477 kubelet[2593]: W0428 00:15:51.396371 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.396477 kubelet[2593]: E0428 00:15:51.396390 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.400642 systemd[1]: Started cri-containerd-4b1fc19008dbc3f1cdc177dbf5759f827fdcb1e913421be414882d99d05586fd.scope - libcontainer container 4b1fc19008dbc3f1cdc177dbf5759f827fdcb1e913421be414882d99d05586fd. Apr 28 00:15:51.444019 containerd[1482]: time="2026-04-28T00:15:51.443918958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598d6f7fcc-zgnwk,Uid:66f3f639-8485-49d2-9cdf-5eda5ca565e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b1fc19008dbc3f1cdc177dbf5759f827fdcb1e913421be414882d99d05586fd\"" Apr 28 00:15:51.446190 containerd[1482]: time="2026-04-28T00:15:51.446045960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 28 00:15:51.464244 kubelet[2593]: E0428 00:15:51.464205 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.464244 kubelet[2593]: W0428 00:15:51.464235 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.464525 kubelet[2593]: E0428 00:15:51.464360 2593 plugins.go:697] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.464901 kubelet[2593]: E0428 00:15:51.464884 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.465014 kubelet[2593]: W0428 00:15:51.464900 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.465054 kubelet[2593]: E0428 00:15:51.465016 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.465359 kubelet[2593]: E0428 00:15:51.465341 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.465359 kubelet[2593]: W0428 00:15:51.465357 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.465455 kubelet[2593]: E0428 00:15:51.465370 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.465926 kubelet[2593]: E0428 00:15:51.465907 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.465926 kubelet[2593]: W0428 00:15:51.465923 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.466098 kubelet[2593]: E0428 00:15:51.466030 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.466434 kubelet[2593]: E0428 00:15:51.466396 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.466530 kubelet[2593]: W0428 00:15:51.466414 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.466660 kubelet[2593]: E0428 00:15:51.466532 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.466963 kubelet[2593]: E0428 00:15:51.466845 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.466963 kubelet[2593]: W0428 00:15:51.466960 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.467504 kubelet[2593]: E0428 00:15:51.466972 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.467504 kubelet[2593]: E0428 00:15:51.467335 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.467504 kubelet[2593]: W0428 00:15:51.467347 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.467504 kubelet[2593]: E0428 00:15:51.467358 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.467766 kubelet[2593]: E0428 00:15:51.467749 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.467766 kubelet[2593]: W0428 00:15:51.467764 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.467945 kubelet[2593]: E0428 00:15:51.467774 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.469104 kubelet[2593]: E0428 00:15:51.469076 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.469104 kubelet[2593]: W0428 00:15:51.469099 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.469211 kubelet[2593]: E0428 00:15:51.469114 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.470395 kubelet[2593]: E0428 00:15:51.470369 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.470395 kubelet[2593]: W0428 00:15:51.470389 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.470485 kubelet[2593]: E0428 00:15:51.470405 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.471093 kubelet[2593]: E0428 00:15:51.470659 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.471093 kubelet[2593]: W0428 00:15:51.470674 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.471093 kubelet[2593]: E0428 00:15:51.470684 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.471093 kubelet[2593]: E0428 00:15:51.470819 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.471093 kubelet[2593]: W0428 00:15:51.470826 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.471093 kubelet[2593]: E0428 00:15:51.470834 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.471251 kubelet[2593]: E0428 00:15:51.471160 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.471251 kubelet[2593]: W0428 00:15:51.471170 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.471251 kubelet[2593]: E0428 00:15:51.471179 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.471617 kubelet[2593]: E0428 00:15:51.471577 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.471617 kubelet[2593]: W0428 00:15:51.471596 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.471617 kubelet[2593]: E0428 00:15:51.471609 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.472065 kubelet[2593]: E0428 00:15:51.472038 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.472065 kubelet[2593]: W0428 00:15:51.472054 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.472065 kubelet[2593]: E0428 00:15:51.472064 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.475183 containerd[1482]: time="2026-04-28T00:15:51.473477783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mklj8,Uid:882a6cc6-ea1a-40b9-85d2-e2ba98a262ae,Namespace:calico-system,Attempt:0,}" Apr 28 00:15:51.475546 kubelet[2593]: E0428 00:15:51.475500 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.475546 kubelet[2593]: W0428 00:15:51.475530 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.475546 kubelet[2593]: E0428 00:15:51.475548 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.476408 kubelet[2593]: E0428 00:15:51.476353 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.476408 kubelet[2593]: W0428 00:15:51.476369 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.476408 kubelet[2593]: E0428 00:15:51.476382 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.478575 kubelet[2593]: E0428 00:15:51.477586 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.478575 kubelet[2593]: W0428 00:15:51.477608 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.478575 kubelet[2593]: E0428 00:15:51.477621 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.478575 kubelet[2593]: E0428 00:15:51.478000 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.478575 kubelet[2593]: W0428 00:15:51.478018 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.478575 kubelet[2593]: E0428 00:15:51.478030 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.478575 kubelet[2593]: E0428 00:15:51.478423 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.478575 kubelet[2593]: W0428 00:15:51.478438 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.478575 kubelet[2593]: E0428 00:15:51.478449 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.479632 kubelet[2593]: E0428 00:15:51.479486 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.479632 kubelet[2593]: W0428 00:15:51.479549 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.479632 kubelet[2593]: E0428 00:15:51.479564 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.481898 kubelet[2593]: E0428 00:15:51.480653 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.481898 kubelet[2593]: W0428 00:15:51.480671 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.481898 kubelet[2593]: E0428 00:15:51.480683 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.481898 kubelet[2593]: E0428 00:15:51.480962 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.481898 kubelet[2593]: W0428 00:15:51.481006 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.481898 kubelet[2593]: E0428 00:15:51.481017 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.482321 kubelet[2593]: E0428 00:15:51.482165 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.482321 kubelet[2593]: W0428 00:15:51.482177 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.482321 kubelet[2593]: E0428 00:15:51.482189 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.483887 kubelet[2593]: E0428 00:15:51.483090 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.483887 kubelet[2593]: W0428 00:15:51.483110 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.483887 kubelet[2593]: E0428 00:15:51.483122 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:51.491633 kubelet[2593]: E0428 00:15:51.491453 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:51.491633 kubelet[2593]: W0428 00:15:51.491482 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:51.491633 kubelet[2593]: E0428 00:15:51.491504 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:51.502183 containerd[1482]: time="2026-04-28T00:15:51.501651885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:15:51.502361 containerd[1482]: time="2026-04-28T00:15:51.502214697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:15:51.502361 containerd[1482]: time="2026-04-28T00:15:51.502246830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:51.502512 containerd[1482]: time="2026-04-28T00:15:51.502445464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:15:51.522232 systemd[1]: Started cri-containerd-e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741.scope - libcontainer container e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741. 
Apr 28 00:15:51.555241 containerd[1482]: time="2026-04-28T00:15:51.555079589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mklj8,Uid:882a6cc6-ea1a-40b9-85d2-e2ba98a262ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\"" Apr 28 00:15:51.607766 systemd[1]: Started sshd@9-128.140.91.51:22-103.62.153.11:62957.service - OpenSSH per-connection server daemon (103.62.153.11:62957). Apr 28 00:15:52.754044 kubelet[2593]: E0428 00:15:52.753260 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3" Apr 28 00:15:53.228170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254444418.mount: Deactivated successfully. Apr 28 00:15:53.308569 sshd[3099]: Received disconnect from 103.62.153.11 port 62957:11: Bye Bye [preauth] Apr 28 00:15:53.308569 sshd[3099]: Disconnected from authenticating user root 103.62.153.11 port 62957 [preauth] Apr 28 00:15:53.311772 systemd[1]: sshd@9-128.140.91.51:22-103.62.153.11:62957.service: Deactivated successfully. 
Apr 28 00:15:54.157786 containerd[1482]: time="2026-04-28T00:15:54.157696635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:54.160225 containerd[1482]: time="2026-04-28T00:15:54.160039871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=32841445" Apr 28 00:15:54.160738 containerd[1482]: time="2026-04-28T00:15:54.160665073Z" level=info msg="ImageCreate event name:\"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:54.164610 containerd[1482]: time="2026-04-28T00:15:54.164555129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:54.165894 containerd[1482]: time="2026-04-28T00:15:54.165080899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"32841299\" in 2.718960711s" Apr 28 00:15:54.165894 containerd[1482]: time="2026-04-28T00:15:54.165115270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\"" Apr 28 00:15:54.168114 containerd[1482]: time="2026-04-28T00:15:54.167602753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 28 00:15:54.187885 containerd[1482]: time="2026-04-28T00:15:54.187816158Z" level=info msg="CreateContainer within sandbox \"4b1fc19008dbc3f1cdc177dbf5759f827fdcb1e913421be414882d99d05586fd\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 28 00:15:54.207514 containerd[1482]: time="2026-04-28T00:15:54.207446494Z" level=info msg="CreateContainer within sandbox \"4b1fc19008dbc3f1cdc177dbf5759f827fdcb1e913421be414882d99d05586fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5972a0f369fbbb6ee959924c4fd4acf1cc21525a5678a4e42dfe6b60c495bf58\"" Apr 28 00:15:54.208717 containerd[1482]: time="2026-04-28T00:15:54.208571657Z" level=info msg="StartContainer for \"5972a0f369fbbb6ee959924c4fd4acf1cc21525a5678a4e42dfe6b60c495bf58\"" Apr 28 00:15:54.249193 systemd[1]: Started cri-containerd-5972a0f369fbbb6ee959924c4fd4acf1cc21525a5678a4e42dfe6b60c495bf58.scope - libcontainer container 5972a0f369fbbb6ee959924c4fd4acf1cc21525a5678a4e42dfe6b60c495bf58. Apr 28 00:15:54.295606 containerd[1482]: time="2026-04-28T00:15:54.295518244Z" level=info msg="StartContainer for \"5972a0f369fbbb6ee959924c4fd4acf1cc21525a5678a4e42dfe6b60c495bf58\" returns successfully" Apr 28 00:15:54.754413 kubelet[2593]: E0428 00:15:54.754201 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3" Apr 28 00:15:54.877647 kubelet[2593]: E0428 00:15:54.877544 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.877647 kubelet[2593]: W0428 00:15:54.877586 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.877647 kubelet[2593]: E0428 00:15:54.877650 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.878239 kubelet[2593]: E0428 00:15:54.878199 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.878312 kubelet[2593]: W0428 00:15:54.878218 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.878312 kubelet[2593]: E0428 00:15:54.878281 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.878519 kubelet[2593]: E0428 00:15:54.878500 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.878519 kubelet[2593]: W0428 00:15:54.878515 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.878609 kubelet[2593]: E0428 00:15:54.878525 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.878732 kubelet[2593]: E0428 00:15:54.878714 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.878732 kubelet[2593]: W0428 00:15:54.878722 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.878732 kubelet[2593]: E0428 00:15:54.878730 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.878945 kubelet[2593]: E0428 00:15:54.878920 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.878945 kubelet[2593]: W0428 00:15:54.878944 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.879274 kubelet[2593]: E0428 00:15:54.878954 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.879274 kubelet[2593]: E0428 00:15:54.879160 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.879274 kubelet[2593]: W0428 00:15:54.879186 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.879274 kubelet[2593]: E0428 00:15:54.879197 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.879489 kubelet[2593]: E0428 00:15:54.879391 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.879489 kubelet[2593]: W0428 00:15:54.879400 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.879489 kubelet[2593]: E0428 00:15:54.879444 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.879669 kubelet[2593]: E0428 00:15:54.879647 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.879716 kubelet[2593]: W0428 00:15:54.879662 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.879716 kubelet[2593]: E0428 00:15:54.879690 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.879942 kubelet[2593]: E0428 00:15:54.879926 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.880023 kubelet[2593]: W0428 00:15:54.879941 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.880023 kubelet[2593]: E0428 00:15:54.879966 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.880183 kubelet[2593]: E0428 00:15:54.880162 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.880248 kubelet[2593]: W0428 00:15:54.880199 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.880248 kubelet[2593]: E0428 00:15:54.880213 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.880392 kubelet[2593]: E0428 00:15:54.880375 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.880392 kubelet[2593]: W0428 00:15:54.880390 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.880505 kubelet[2593]: E0428 00:15:54.880400 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.880660 kubelet[2593]: E0428 00:15:54.880642 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.880660 kubelet[2593]: W0428 00:15:54.880658 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.880749 kubelet[2593]: E0428 00:15:54.880668 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.880842 kubelet[2593]: E0428 00:15:54.880829 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.880842 kubelet[2593]: W0428 00:15:54.880840 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.880944 kubelet[2593]: E0428 00:15:54.880870 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.881040 kubelet[2593]: E0428 00:15:54.881023 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.881040 kubelet[2593]: W0428 00:15:54.881038 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.881134 kubelet[2593]: E0428 00:15:54.881047 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.881220 kubelet[2593]: E0428 00:15:54.881204 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.881220 kubelet[2593]: W0428 00:15:54.881217 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.881298 kubelet[2593]: E0428 00:15:54.881226 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.891824 kubelet[2593]: E0428 00:15:54.891771 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.891824 kubelet[2593]: W0428 00:15:54.891796 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.892157 kubelet[2593]: E0428 00:15:54.891922 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.894269 kubelet[2593]: E0428 00:15:54.892258 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.894269 kubelet[2593]: W0428 00:15:54.892276 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.894269 kubelet[2593]: E0428 00:15:54.892287 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.894269 kubelet[2593]: E0428 00:15:54.892540 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.894269 kubelet[2593]: W0428 00:15:54.892549 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.894269 kubelet[2593]: E0428 00:15:54.892574 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.894269 kubelet[2593]: E0428 00:15:54.892827 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.894269 kubelet[2593]: W0428 00:15:54.892836 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.894269 kubelet[2593]: E0428 00:15:54.892845 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.894269 kubelet[2593]: E0428 00:15:54.893110 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.895573 kubelet[2593]: W0428 00:15:54.893121 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.895573 kubelet[2593]: E0428 00:15:54.893131 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.895573 kubelet[2593]: E0428 00:15:54.893317 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.895573 kubelet[2593]: W0428 00:15:54.893326 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.895573 kubelet[2593]: E0428 00:15:54.893373 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.895573 kubelet[2593]: E0428 00:15:54.893633 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.895573 kubelet[2593]: W0428 00:15:54.893643 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.895573 kubelet[2593]: E0428 00:15:54.893653 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.895573 kubelet[2593]: E0428 00:15:54.894194 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.895573 kubelet[2593]: W0428 00:15:54.894206 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.895798 kubelet[2593]: E0428 00:15:54.894234 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.895798 kubelet[2593]: E0428 00:15:54.894489 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.895798 kubelet[2593]: W0428 00:15:54.894500 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.895798 kubelet[2593]: E0428 00:15:54.894510 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.897691 kubelet[2593]: E0428 00:15:54.897665 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.897691 kubelet[2593]: W0428 00:15:54.897688 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.897810 kubelet[2593]: E0428 00:15:54.897707 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.898967 kubelet[2593]: E0428 00:15:54.898917 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.898967 kubelet[2593]: W0428 00:15:54.898940 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.898967 kubelet[2593]: E0428 00:15:54.898955 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.899224 kubelet[2593]: E0428 00:15:54.899207 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.899263 kubelet[2593]: W0428 00:15:54.899225 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.899263 kubelet[2593]: E0428 00:15:54.899237 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.899505 kubelet[2593]: E0428 00:15:54.899484 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.899505 kubelet[2593]: W0428 00:15:54.899501 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.899592 kubelet[2593]: E0428 00:15:54.899512 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.899759 kubelet[2593]: E0428 00:15:54.899743 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.899805 kubelet[2593]: W0428 00:15:54.899760 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.899805 kubelet[2593]: E0428 00:15:54.899771 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.900105 kubelet[2593]: E0428 00:15:54.900081 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.900105 kubelet[2593]: W0428 00:15:54.900105 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.900185 kubelet[2593]: E0428 00:15:54.900117 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.901028 kubelet[2593]: E0428 00:15:54.901009 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.901028 kubelet[2593]: W0428 00:15:54.901025 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.901028 kubelet[2593]: E0428 00:15:54.901037 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:54.901315 kubelet[2593]: E0428 00:15:54.901261 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.901315 kubelet[2593]: W0428 00:15:54.901274 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.901315 kubelet[2593]: E0428 00:15:54.901282 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:15:54.901666 kubelet[2593]: E0428 00:15:54.901651 2593 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:15:54.901666 kubelet[2593]: W0428 00:15:54.901664 2593 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:15:54.901734 kubelet[2593]: E0428 00:15:54.901674 2593 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:15:55.694897 containerd[1482]: time="2026-04-28T00:15:55.694723032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:55.696969 containerd[1482]: time="2026-04-28T00:15:55.696902822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4404646" Apr 28 00:15:55.698260 containerd[1482]: time="2026-04-28T00:15:55.698160728Z" level=info msg="ImageCreate event name:\"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:55.701568 containerd[1482]: time="2026-04-28T00:15:55.701244354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:15:55.702231 containerd[1482]: time="2026-04-28T00:15:55.702183602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"6980245\" in 1.534511427s" Apr 28 00:15:55.702231 containerd[1482]: time="2026-04-28T00:15:55.702228056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\"" Apr 28 00:15:55.707520 containerd[1482]: time="2026-04-28T00:15:55.707468264Z" level=info msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 28 00:15:55.723693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068317096.mount: Deactivated successfully.
Apr 28 00:15:55.729833 containerd[1482]: time="2026-04-28T00:15:55.729778872Z" level=info msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f\""
Apr 28 00:15:55.731950 containerd[1482]: time="2026-04-28T00:15:55.730468124Z" level=info msg="StartContainer for \"b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f\""
Apr 28 00:15:55.769524 systemd[1]: Started cri-containerd-b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f.scope - libcontainer container b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f.
Apr 28 00:15:55.804571 containerd[1482]: time="2026-04-28T00:15:55.804461835Z" level=info msg="StartContainer for \"b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f\" returns successfully"
Apr 28 00:15:55.818776 systemd[1]: cri-containerd-b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f.scope: Deactivated successfully.
Apr 28 00:15:55.846708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f-rootfs.mount: Deactivated successfully.
Apr 28 00:15:55.853907 kubelet[2593]: I0428 00:15:55.853630 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 28 00:15:55.880831 kubelet[2593]: I0428 00:15:55.880754 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-598d6f7fcc-zgnwk" podStartSLOduration=3.158676933 podStartE2EDuration="5.880736046s" podCreationTimestamp="2026-04-28 00:15:50 +0000 UTC" firstStartedPulling="2026-04-28 00:15:51.445340214 +0000 UTC m=+19.830131979" lastFinishedPulling="2026-04-28 00:15:54.167399287 +0000 UTC m=+22.552191092" observedRunningTime="2026-04-28 00:15:54.864524758 +0000 UTC m=+23.249316603" watchObservedRunningTime="2026-04-28 00:15:55.880736046 +0000 UTC m=+24.265527851"
Apr 28 00:15:55.927888 containerd[1482]: time="2026-04-28T00:15:55.927793689Z" level=info msg="shim disconnected" id=b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f namespace=k8s.io
Apr 28 00:15:55.928078 containerd[1482]: time="2026-04-28T00:15:55.927902603Z" level=warning msg="cleaning up after shim disconnected" id=b2418ec8b2a52a69b57a3a8bcfa27e8b7d8b1ec3e37d25623c0ae896e7d4353f namespace=k8s.io
Apr 28 00:15:55.928078 containerd[1482]: time="2026-04-28T00:15:55.927915007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:15:56.753951 kubelet[2593]: E0428 00:15:56.753837 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3"
Apr 28 00:15:56.866920 containerd[1482]: time="2026-04-28T00:15:56.864533798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\""
Apr 28 00:15:58.754163 kubelet[2593]: E0428 00:15:58.753837 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3"
Apr 28 00:16:00.753781 kubelet[2593]: E0428 00:16:00.753353 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3"
Apr 28 00:16:01.328871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704126868.mount: Deactivated successfully.
Apr 28 00:16:01.354243 containerd[1482]: time="2026-04-28T00:16:01.354149471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:16:01.358462 containerd[1482]: time="2026-04-28T00:16:01.357115434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=153029581"
Apr 28 00:16:01.359541 containerd[1482]: time="2026-04-28T00:16:01.359499104Z" level=info msg="ImageCreate event name:\"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:16:01.362546 containerd[1482]: time="2026-04-28T00:16:01.362489633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:16:01.363635 containerd[1482]: time="2026-04-28T00:16:01.363005472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"153029443\" in 4.498392211s"
Apr 28 00:16:01.363635 containerd[1482]: time="2026-04-28T00:16:01.363040160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\""
Apr 28 00:16:01.369750 containerd[1482]: time="2026-04-28T00:16:01.369694974Z" level=info msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 28 00:16:01.389708 containerd[1482]: time="2026-04-28T00:16:01.389638931Z" level=info msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819\""
Apr 28 00:16:01.391691 containerd[1482]: time="2026-04-28T00:16:01.391624229Z" level=info msg="StartContainer for \"41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819\""
Apr 28 00:16:01.425151 systemd[1]: Started cri-containerd-41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819.scope - libcontainer container 41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819.
Apr 28 00:16:01.459375 containerd[1482]: time="2026-04-28T00:16:01.459317993Z" level=info msg="StartContainer for \"41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819\" returns successfully"
Apr 28 00:16:01.572858 systemd[1]: cri-containerd-41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819.scope: Deactivated successfully.
Apr 28 00:16:01.752926 containerd[1482]: time="2026-04-28T00:16:01.752582035Z" level=info msg="shim disconnected" id=41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819 namespace=k8s.io
Apr 28 00:16:01.752926 containerd[1482]: time="2026-04-28T00:16:01.752642209Z" level=warning msg="cleaning up after shim disconnected" id=41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819 namespace=k8s.io
Apr 28 00:16:01.752926 containerd[1482]: time="2026-04-28T00:16:01.752651011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:16:01.772560 containerd[1482]: time="2026-04-28T00:16:01.772490464Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:16:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 00:16:01.878397 containerd[1482]: time="2026-04-28T00:16:01.878299495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\""
Apr 28 00:16:02.329922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41225fef0a1585b18575f70b904e13eb8edf767f6c75ca91dc7ed92b597a6819-rootfs.mount: Deactivated successfully.
Apr 28 00:16:02.753787 kubelet[2593]: E0428 00:16:02.753559 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3"
Apr 28 00:16:04.366023 containerd[1482]: time="2026-04-28T00:16:04.365498546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:16:04.366865 containerd[1482]: time="2026-04-28T00:16:04.366786006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=62266008"
Apr 28 00:16:04.368038 containerd[1482]: time="2026-04-28T00:16:04.368004172Z" level=info msg="ImageCreate event name:\"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:16:04.370764 containerd[1482]: time="2026-04-28T00:16:04.370633744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:16:04.372078 containerd[1482]: time="2026-04-28T00:16:04.371914643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"64841647\" in 2.493561376s"
Apr 28 00:16:04.372078 containerd[1482]: time="2026-04-28T00:16:04.371961612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\""
Apr 28 00:16:04.378699 containerd[1482]: time="2026-04-28T00:16:04.378313536Z" level=info msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 28 00:16:04.395182 containerd[1482]: time="2026-04-28T00:16:04.395123772Z" level=info msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164\""
Apr 28 00:16:04.397673 containerd[1482]: time="2026-04-28T00:16:04.396000870Z" level=info msg="StartContainer for \"b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164\""
Apr 28 00:16:04.434197 systemd[1]: Started cri-containerd-b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164.scope - libcontainer container b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164.
Apr 28 00:16:04.470891 containerd[1482]: time="2026-04-28T00:16:04.470517527Z" level=info msg="StartContainer for \"b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164\" returns successfully"
Apr 28 00:16:04.754935 kubelet[2593]: E0428 00:16:04.753275 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w9gwt" podUID="de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3"
Apr 28 00:16:05.008015 containerd[1482]: time="2026-04-28T00:16:05.007486175Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 00:16:05.010787 systemd[1]: cri-containerd-b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164.scope: Deactivated successfully.
Apr 28 00:16:05.034545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164-rootfs.mount: Deactivated successfully.
Apr 28 00:16:05.053536 kubelet[2593]: I0428 00:16:05.051397 2593 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 28 00:16:05.094974 containerd[1482]: time="2026-04-28T00:16:05.094627578Z" level=info msg="shim disconnected" id=b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164 namespace=k8s.io
Apr 28 00:16:05.094974 containerd[1482]: time="2026-04-28T00:16:05.094966324Z" level=warning msg="cleaning up after shim disconnected" id=b756e70827628543897373e5a4ac594d56b0adbeb52e0ec616e79129b0ebc164 namespace=k8s.io
Apr 28 00:16:05.098074 containerd[1482]: time="2026-04-28T00:16:05.095000090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:16:05.131784 systemd[1]: Created slice kubepods-burstable-pod962851dd_9192_4cce_9ab2_039449d01d65.slice - libcontainer container kubepods-burstable-pod962851dd_9192_4cce_9ab2_039449d01d65.slice.
Apr 28 00:16:05.144833 systemd[1]: Created slice kubepods-besteffort-pod9fe175ef_faaa_484e_a8c4_f822ee0538e6.slice - libcontainer container kubepods-besteffort-pod9fe175ef_faaa_484e_a8c4_f822ee0538e6.slice.
Apr 28 00:16:05.159016 systemd[1]: Created slice kubepods-besteffort-pod8d7f6caa_ecef_41c4_8a1f_7174e0b0ebd9.slice - libcontainer container kubepods-besteffort-pod8d7f6caa_ecef_41c4_8a1f_7174e0b0ebd9.slice.
Apr 28 00:16:05.163371 systemd[1]: Created slice kubepods-burstable-pod2b3bdf19_11df_48e0_a735_59ffdd0aa820.slice - libcontainer container kubepods-burstable-pod2b3bdf19_11df_48e0_a735_59ffdd0aa820.slice.
Apr 28 00:16:05.170488 systemd[1]: Created slice kubepods-besteffort-podfbde6d47_517b_4e34_9be8_87b093cf01ab.slice - libcontainer container kubepods-besteffort-podfbde6d47_517b_4e34_9be8_87b093cf01ab.slice.
Apr 28 00:16:05.172771 kubelet[2593]: I0428 00:16:05.172005 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-ca-bundle\") pod \"whisker-778646646c-g6l5c\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " pod="calico-system/whisker-778646646c-g6l5c"
Apr 28 00:16:05.172771 kubelet[2593]: I0428 00:16:05.172096 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4tc\" (UniqueName: \"kubernetes.io/projected/fbde6d47-517b-4e34-9be8-87b093cf01ab-kube-api-access-7j4tc\") pod \"whisker-778646646c-g6l5c\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " pod="calico-system/whisker-778646646c-g6l5c"
Apr 28 00:16:05.172771 kubelet[2593]: I0428 00:16:05.172125 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-nginx-config\") pod \"whisker-778646646c-g6l5c\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " pod="calico-system/whisker-778646646c-g6l5c"
Apr 28 00:16:05.172771 kubelet[2593]: I0428 00:16:05.172170 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-backend-key-pair\") pod \"whisker-778646646c-g6l5c\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " pod="calico-system/whisker-778646646c-g6l5c"
Apr 28 00:16:05.172771 kubelet[2593]: I0428 00:16:05.172195 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9-tigera-ca-bundle\") pod \"calico-kube-controllers-6d9b95f8d-872zf\" (UID: \"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9\") " pod="calico-system/calico-kube-controllers-6d9b95f8d-872zf"
Apr 28 00:16:05.173085 kubelet[2593]: I0428 00:16:05.172238 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m7t7\" (UniqueName: \"kubernetes.io/projected/9fe175ef-faaa-484e-a8c4-f822ee0538e6-kube-api-access-7m7t7\") pod \"calico-apiserver-57fccbd9fd-k8kd4\" (UID: \"9fe175ef-faaa-484e-a8c4-f822ee0538e6\") " pod="calico-system/calico-apiserver-57fccbd9fd-k8kd4"
Apr 28 00:16:05.173085 kubelet[2593]: I0428 00:16:05.172263 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3bfb0bf9-7964-4238-9b69-c54d44a2b3dc-calico-apiserver-certs\") pod \"calico-apiserver-57fccbd9fd-fbwcj\" (UID: \"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc\") " pod="calico-system/calico-apiserver-57fccbd9fd-fbwcj"
Apr 28 00:16:05.173085 kubelet[2593]: I0428 00:16:05.172283 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8pd\" (UniqueName: \"kubernetes.io/projected/8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9-kube-api-access-4w8pd\") pod \"calico-kube-controllers-6d9b95f8d-872zf\" (UID: \"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9\") " pod="calico-system/calico-kube-controllers-6d9b95f8d-872zf"
Apr 28 00:16:05.173085 kubelet[2593]: I0428 00:16:05.172328 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9sgh\" (UniqueName: \"kubernetes.io/projected/2b3bdf19-11df-48e0-a735-59ffdd0aa820-kube-api-access-b9sgh\") pod \"coredns-66bc5c9577-x848s\" (UID: \"2b3bdf19-11df-48e0-a735-59ffdd0aa820\") " pod="kube-system/coredns-66bc5c9577-x848s"
Apr 28 00:16:05.173085 kubelet[2593]: I0428 00:16:05.172351 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9fe175ef-faaa-484e-a8c4-f822ee0538e6-calico-apiserver-certs\") pod \"calico-apiserver-57fccbd9fd-k8kd4\" (UID: \"9fe175ef-faaa-484e-a8c4-f822ee0538e6\") " pod="calico-system/calico-apiserver-57fccbd9fd-k8kd4"
Apr 28 00:16:05.173199 kubelet[2593]: I0428 00:16:05.172390 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brq5n\" (UniqueName: \"kubernetes.io/projected/3bfb0bf9-7964-4238-9b69-c54d44a2b3dc-kube-api-access-brq5n\") pod \"calico-apiserver-57fccbd9fd-fbwcj\" (UID: \"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc\") " pod="calico-system/calico-apiserver-57fccbd9fd-fbwcj"
Apr 28 00:16:05.173199 kubelet[2593]: I0428 00:16:05.172421 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b3bdf19-11df-48e0-a735-59ffdd0aa820-config-volume\") pod \"coredns-66bc5c9577-x848s\" (UID: \"2b3bdf19-11df-48e0-a735-59ffdd0aa820\") " pod="kube-system/coredns-66bc5c9577-x848s"
Apr 28 00:16:05.173199 kubelet[2593]: I0428 00:16:05.172444 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64f11de-3264-4d46-8dd1-0c473c7b70ea-goldmane-ca-bundle\") pod \"goldmane-6b4b7f4496-57sjc\" (UID: \"b64f11de-3264-4d46-8dd1-0c473c7b70ea\") " pod="calico-system/goldmane-6b4b7f4496-57sjc"
Apr 28 00:16:05.173199 kubelet[2593]: I0428 00:16:05.172489 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2lv\" (UniqueName: \"kubernetes.io/projected/b64f11de-3264-4d46-8dd1-0c473c7b70ea-kube-api-access-md2lv\") pod \"goldmane-6b4b7f4496-57sjc\" (UID: \"b64f11de-3264-4d46-8dd1-0c473c7b70ea\") " pod="calico-system/goldmane-6b4b7f4496-57sjc"
Apr 28 00:16:05.173199 kubelet[2593]: I0428 00:16:05.172514 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/962851dd-9192-4cce-9ab2-039449d01d65-config-volume\") pod \"coredns-66bc5c9577-wphhk\" (UID: \"962851dd-9192-4cce-9ab2-039449d01d65\") " pod="kube-system/coredns-66bc5c9577-wphhk"
Apr 28 00:16:05.173311 kubelet[2593]: I0428 00:16:05.172555 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4db87\" (UniqueName: \"kubernetes.io/projected/962851dd-9192-4cce-9ab2-039449d01d65-kube-api-access-4db87\") pod \"coredns-66bc5c9577-wphhk\" (UID: \"962851dd-9192-4cce-9ab2-039449d01d65\") " pod="kube-system/coredns-66bc5c9577-wphhk"
Apr 28 00:16:05.173311 kubelet[2593]: I0428 00:16:05.172609 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b64f11de-3264-4d46-8dd1-0c473c7b70ea-config\") pod \"goldmane-6b4b7f4496-57sjc\" (UID: \"b64f11de-3264-4d46-8dd1-0c473c7b70ea\") " pod="calico-system/goldmane-6b4b7f4496-57sjc"
Apr 28 00:16:05.173311 kubelet[2593]: I0428 00:16:05.172661 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b64f11de-3264-4d46-8dd1-0c473c7b70ea-goldmane-key-pair\") pod \"goldmane-6b4b7f4496-57sjc\" (UID: \"b64f11de-3264-4d46-8dd1-0c473c7b70ea\") " pod="calico-system/goldmane-6b4b7f4496-57sjc"
Apr 28 00:16:05.199503 systemd[1]: Created slice kubepods-besteffort-podb64f11de_3264_4d46_8dd1_0c473c7b70ea.slice - libcontainer container kubepods-besteffort-podb64f11de_3264_4d46_8dd1_0c473c7b70ea.slice.
Apr 28 00:16:05.203794 systemd[1]: Created slice kubepods-besteffort-pod3bfb0bf9_7964_4238_9b69_c54d44a2b3dc.slice - libcontainer container kubepods-besteffort-pod3bfb0bf9_7964_4238_9b69_c54d44a2b3dc.slice.
Apr 28 00:16:05.443125 containerd[1482]: time="2026-04-28T00:16:05.443061565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wphhk,Uid:962851dd-9192-4cce-9ab2-039449d01d65,Namespace:kube-system,Attempt:0,}"
Apr 28 00:16:05.469236 containerd[1482]: time="2026-04-28T00:16:05.469173864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b95f8d-872zf,Uid:8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9,Namespace:calico-system,Attempt:0,}"
Apr 28 00:16:05.479895 containerd[1482]: time="2026-04-28T00:16:05.479514667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x848s,Uid:2b3bdf19-11df-48e0-a735-59ffdd0aa820,Namespace:kube-system,Attempt:0,}"
Apr 28 00:16:05.480740 containerd[1482]: time="2026-04-28T00:16:05.480695536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-k8kd4,Uid:9fe175ef-faaa-484e-a8c4-f822ee0538e6,Namespace:calico-system,Attempt:0,}"
Apr 28 00:16:05.500190 containerd[1482]: time="2026-04-28T00:16:05.500148345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778646646c-g6l5c,Uid:fbde6d47-517b-4e34-9be8-87b093cf01ab,Namespace:calico-system,Attempt:0,}"
Apr 28 00:16:05.512914 containerd[1482]: time="2026-04-28T00:16:05.512755747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-57sjc,Uid:b64f11de-3264-4d46-8dd1-0c473c7b70ea,Namespace:calico-system,Attempt:0,}"
Apr 28 00:16:05.520365 containerd[1482]: time="2026-04-28T00:16:05.520023676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-fbwcj,Uid:3bfb0bf9-7964-4238-9b69-c54d44a2b3dc,Namespace:calico-system,Attempt:0,}"
Apr 28 00:16:05.637281 containerd[1482]: time="2026-04-28T00:16:05.637162610Z" level=error msg="Failed to destroy network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.638158 containerd[1482]: time="2026-04-28T00:16:05.638061505Z" level=error msg="encountered an error cleaning up failed sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.639569 containerd[1482]: time="2026-04-28T00:16:05.638182048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wphhk,Uid:962851dd-9192-4cce-9ab2-039449d01d65,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.639621 kubelet[2593]: E0428 00:16:05.638437 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.639621 kubelet[2593]: E0428 00:16:05.638505 2593 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wphhk"
Apr 28 00:16:05.639621 kubelet[2593]: E0428 00:16:05.638528 2593 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wphhk"
Apr 28 00:16:05.639726 kubelet[2593]: E0428 00:16:05.638604 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wphhk_kube-system(962851dd-9192-4cce-9ab2-039449d01d65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wphhk_kube-system(962851dd-9192-4cce-9ab2-039449d01d65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wphhk" podUID="962851dd-9192-4cce-9ab2-039449d01d65"
Apr 28 00:16:05.684949 containerd[1482]: time="2026-04-28T00:16:05.684676936Z" level=error msg="Failed to destroy network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.685169 containerd[1482]: time="2026-04-28T00:16:05.685083015Z" level=error msg="encountered an error cleaning up failed sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.685169 containerd[1482]: time="2026-04-28T00:16:05.685158629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x848s,Uid:2b3bdf19-11df-48e0-a735-59ffdd0aa820,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.685904 kubelet[2593]: E0428 00:16:05.685457 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.685904 kubelet[2593]: E0428 00:16:05.685511 2593 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x848s"
Apr 28 00:16:05.685904 kubelet[2593]: E0428 00:16:05.685530 2593 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x848s"
Apr 28 00:16:05.686020 kubelet[2593]: E0428 00:16:05.685587 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-x848s_kube-system(2b3bdf19-11df-48e0-a735-59ffdd0aa820)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-x848s_kube-system(2b3bdf19-11df-48e0-a735-59ffdd0aa820)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x848s" podUID="2b3bdf19-11df-48e0-a735-59ffdd0aa820"
Apr 28 00:16:05.703875 containerd[1482]: time="2026-04-28T00:16:05.703308586Z" level=error msg="Failed to destroy network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.708444 containerd[1482]: time="2026-04-28T00:16:05.707393777Z" level=error msg="encountered an error cleaning up failed sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.708444 containerd[1482]: time="2026-04-28T00:16:05.707472232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778646646c-g6l5c,Uid:fbde6d47-517b-4e34-9be8-87b093cf01ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.709161 kubelet[2593]: E0428 00:16:05.707788 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.709161 kubelet[2593]: E0428 00:16:05.707845 2593 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778646646c-g6l5c"
Apr 28 00:16:05.709161 kubelet[2593]: E0428 00:16:05.707878 2593 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778646646c-g6l5c"
Apr 28 00:16:05.709430 kubelet[2593]: E0428 00:16:05.707940 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-778646646c-g6l5c_calico-system(fbde6d47-517b-4e34-9be8-87b093cf01ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-778646646c-g6l5c_calico-system(fbde6d47-517b-4e34-9be8-87b093cf01ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-778646646c-g6l5c" podUID="fbde6d47-517b-4e34-9be8-87b093cf01ab"
Apr 28 00:16:05.722253 containerd[1482]: time="2026-04-28T00:16:05.722119710Z" level=error msg="Failed to destroy network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.723613 containerd[1482]: time="2026-04-28T00:16:05.722758154Z" level=error msg="encountered an error cleaning up failed sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.723613 containerd[1482]: time="2026-04-28T00:16:05.722808484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b95f8d-872zf,Uid:8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.723934 kubelet[2593]: E0428 00:16:05.723082 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.723934 kubelet[2593]: E0428 00:16:05.723143 2593 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9b95f8d-872zf"
Apr 28 00:16:05.723934 kubelet[2593]: E0428 00:16:05.723168 2593 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9b95f8d-872zf"
Apr 28 00:16:05.724073 kubelet[2593]: E0428 00:16:05.723217 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d9b95f8d-872zf_calico-system(8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d9b95f8d-872zf_calico-system(8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9b95f8d-872zf" podUID="8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9"
Apr 28 00:16:05.759447 containerd[1482]: time="2026-04-28T00:16:05.759266307Z" level=error msg="Failed to destroy network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.759881 containerd[1482]: time="2026-04-28T00:16:05.759829136Z" level=error msg="encountered an error cleaning up failed sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.760154 containerd[1482]: time="2026-04-28T00:16:05.760097828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-k8kd4,Uid:9fe175ef-faaa-484e-a8c4-f822ee0538e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 28 00:16:05.760810 kubelet[2593]: E0428 00:16:05.760469 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.760810 kubelet[2593]: E0428 00:16:05.760523 2593 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57fccbd9fd-k8kd4" Apr 28 00:16:05.760810 kubelet[2593]: E0428 00:16:05.760545 2593 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57fccbd9fd-k8kd4" Apr 28 00:16:05.761225 kubelet[2593]: E0428 00:16:05.760772 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57fccbd9fd-k8kd4_calico-system(9fe175ef-faaa-484e-a8c4-f822ee0538e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57fccbd9fd-k8kd4_calico-system(9fe175ef-faaa-484e-a8c4-f822ee0538e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-57fccbd9fd-k8kd4" podUID="9fe175ef-faaa-484e-a8c4-f822ee0538e6" Apr 28 00:16:05.773578 containerd[1482]: time="2026-04-28T00:16:05.772458423Z" level=error msg="Failed to destroy network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.773578 containerd[1482]: time="2026-04-28T00:16:05.772824894Z" level=error msg="encountered an error cleaning up failed sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.773578 containerd[1482]: time="2026-04-28T00:16:05.772896068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-fbwcj,Uid:3bfb0bf9-7964-4238-9b69-c54d44a2b3dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.774074 kubelet[2593]: E0428 00:16:05.773161 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.774074 kubelet[2593]: E0428 00:16:05.773224 2593 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57fccbd9fd-fbwcj" Apr 28 00:16:05.774074 kubelet[2593]: E0428 00:16:05.773246 2593 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57fccbd9fd-fbwcj" Apr 28 00:16:05.774397 kubelet[2593]: E0428 00:16:05.773304 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57fccbd9fd-fbwcj_calico-system(3bfb0bf9-7964-4238-9b69-c54d44a2b3dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57fccbd9fd-fbwcj_calico-system(3bfb0bf9-7964-4238-9b69-c54d44a2b3dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57fccbd9fd-fbwcj" podUID="3bfb0bf9-7964-4238-9b69-c54d44a2b3dc" Apr 28 00:16:05.786779 containerd[1482]: time="2026-04-28T00:16:05.786715065Z" level=error msg="Failed to destroy network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.787506 containerd[1482]: time="2026-04-28T00:16:05.787469851Z" level=error msg="encountered an error cleaning up failed sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.787767 containerd[1482]: time="2026-04-28T00:16:05.787566350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-57sjc,Uid:b64f11de-3264-4d46-8dd1-0c473c7b70ea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.789501 kubelet[2593]: E0428 00:16:05.788196 2593 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:05.789501 kubelet[2593]: E0428 00:16:05.788271 2593 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-57sjc" Apr 28 00:16:05.789501 kubelet[2593]: E0428 00:16:05.788300 2593 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-57sjc" Apr 28 00:16:05.789722 kubelet[2593]: E0428 00:16:05.788354 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-6b4b7f4496-57sjc_calico-system(b64f11de-3264-4d46-8dd1-0c473c7b70ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-6b4b7f4496-57sjc_calico-system(b64f11de-3264-4d46-8dd1-0c473c7b70ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-6b4b7f4496-57sjc" podUID="b64f11de-3264-4d46-8dd1-0c473c7b70ea" Apr 28 00:16:05.895114 kubelet[2593]: I0428 00:16:05.893547 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:05.895268 containerd[1482]: time="2026-04-28T00:16:05.894556399Z" level=info msg="StopPodSandbox for \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\"" Apr 28 00:16:05.895268 containerd[1482]: time="2026-04-28T00:16:05.894761798Z" level=info msg="Ensure that sandbox 9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854 in task-service 
has been cleanup successfully" Apr 28 00:16:05.899304 kubelet[2593]: I0428 00:16:05.899266 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:05.900586 containerd[1482]: time="2026-04-28T00:16:05.900545239Z" level=info msg="StopPodSandbox for \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\"" Apr 28 00:16:05.902138 containerd[1482]: time="2026-04-28T00:16:05.902075415Z" level=info msg="Ensure that sandbox 8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee in task-service has been cleanup successfully" Apr 28 00:16:05.920240 kubelet[2593]: I0428 00:16:05.920197 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:05.925885 containerd[1482]: time="2026-04-28T00:16:05.923786502Z" level=info msg="StopPodSandbox for \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\"" Apr 28 00:16:05.925885 containerd[1482]: time="2026-04-28T00:16:05.924244671Z" level=info msg="Ensure that sandbox b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227 in task-service has been cleanup successfully" Apr 28 00:16:05.934924 kubelet[2593]: I0428 00:16:05.932456 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:05.946675 kubelet[2593]: I0428 00:16:05.946636 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:05.946948 containerd[1482]: time="2026-04-28T00:16:05.943814942Z" level=info msg="StopPodSandbox for \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\"" Apr 28 00:16:05.947168 containerd[1482]: time="2026-04-28T00:16:05.945923151Z" level=info 
msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 28 00:16:05.948986 containerd[1482]: time="2026-04-28T00:16:05.948946576Z" level=info msg="Ensure that sandbox a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef in task-service has been cleanup successfully" Apr 28 00:16:05.950146 containerd[1482]: time="2026-04-28T00:16:05.950097599Z" level=info msg="StopPodSandbox for \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\"" Apr 28 00:16:05.955137 containerd[1482]: time="2026-04-28T00:16:05.953711780Z" level=info msg="Ensure that sandbox 649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd in task-service has been cleanup successfully" Apr 28 00:16:05.974197 kubelet[2593]: I0428 00:16:05.974148 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:05.979952 containerd[1482]: time="2026-04-28T00:16:05.979369711Z" level=info msg="StopPodSandbox for \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\"" Apr 28 00:16:05.979952 containerd[1482]: time="2026-04-28T00:16:05.979568029Z" level=info msg="Ensure that sandbox 8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96 in task-service has been cleanup successfully" Apr 28 00:16:05.992347 kubelet[2593]: I0428 00:16:05.992304 2593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:05.996307 containerd[1482]: time="2026-04-28T00:16:05.996138960Z" level=info msg="StopPodSandbox for \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\"" Apr 28 00:16:05.996948 containerd[1482]: time="2026-04-28T00:16:05.996911669Z" level=info msg="Ensure that sandbox 
7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c in task-service has been cleanup successfully" Apr 28 00:16:06.056945 containerd[1482]: time="2026-04-28T00:16:06.056766830Z" level=info msg="CreateContainer within sandbox \"e73fe1febcebd600bf54945ca98e92b4c90bf17b296de2e02c5a315ddb4eb741\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6c38f8ea4254cd7d0880125e32a68dbeed8e58062aa02f754fdccdd8f2a761c4\"" Apr 28 00:16:06.058318 containerd[1482]: time="2026-04-28T00:16:06.058274310Z" level=info msg="StartContainer for \"6c38f8ea4254cd7d0880125e32a68dbeed8e58062aa02f754fdccdd8f2a761c4\"" Apr 28 00:16:06.118315 containerd[1482]: time="2026-04-28T00:16:06.118257903Z" level=error msg="StopPodSandbox for \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\" failed" error="failed to destroy network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:06.118656 kubelet[2593]: E0428 00:16:06.118504 2593 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:06.118656 kubelet[2593]: E0428 00:16:06.118568 2593 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c"} Apr 28 00:16:06.118656 kubelet[2593]: E0428 00:16:06.118647 2593 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"9fe175ef-faaa-484e-a8c4-f822ee0538e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 00:16:06.118825 kubelet[2593]: E0428 00:16:06.118672 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fe175ef-faaa-484e-a8c4-f822ee0538e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57fccbd9fd-k8kd4" podUID="9fe175ef-faaa-484e-a8c4-f822ee0538e6" Apr 28 00:16:06.120879 containerd[1482]: time="2026-04-28T00:16:06.120813098Z" level=error msg="StopPodSandbox for \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\" failed" error="failed to destroy network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:06.121134 kubelet[2593]: E0428 00:16:06.121053 2593 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:06.121283 kubelet[2593]: E0428 00:16:06.121142 2593 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854"} Apr 28 00:16:06.121283 kubelet[2593]: E0428 00:16:06.121172 2593 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 00:16:06.121283 kubelet[2593]: E0428 00:16:06.121210 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9b95f8d-872zf" podUID="8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9" Apr 28 00:16:06.136641 containerd[1482]: time="2026-04-28T00:16:06.136125866Z" level=error msg="StopPodSandbox for \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\" failed" error="failed to destroy network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 28 00:16:06.136791 containerd[1482]: time="2026-04-28T00:16:06.136669487Z" level=error msg="StopPodSandbox for \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\" failed" error="failed to destroy network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:06.137394 kubelet[2593]: E0428 00:16:06.137088 2593 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:06.137394 kubelet[2593]: E0428 00:16:06.137117 2593 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:06.137394 kubelet[2593]: E0428 00:16:06.137142 2593 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee"} Apr 28 00:16:06.137394 kubelet[2593]: E0428 00:16:06.137162 2593 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227"} Apr 28 00:16:06.137394 kubelet[2593]: E0428 00:16:06.137174 2593 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"962851dd-9192-4cce-9ab2-039449d01d65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 00:16:06.137610 kubelet[2593]: E0428 00:16:06.137193 2593 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 00:16:06.137610 kubelet[2593]: E0428 00:16:06.137200 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"962851dd-9192-4cce-9ab2-039449d01d65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wphhk" podUID="962851dd-9192-4cce-9ab2-039449d01d65" Apr 28 00:16:06.137610 kubelet[2593]: E0428 00:16:06.137215 2593 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57fccbd9fd-fbwcj" podUID="3bfb0bf9-7964-4238-9b69-c54d44a2b3dc" Apr 28 00:16:06.143544 containerd[1482]: time="2026-04-28T00:16:06.143452428Z" level=error msg="StopPodSandbox for \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\" failed" error="failed to destroy network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:06.143762 kubelet[2593]: E0428 00:16:06.143709 2593 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:06.143815 kubelet[2593]: E0428 00:16:06.143771 2593 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd"} Apr 28 00:16:06.143878 kubelet[2593]: E0428 00:16:06.143804 2593 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b3bdf19-11df-48e0-a735-59ffdd0aa820\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 00:16:06.143878 kubelet[2593]: E0428 00:16:06.143843 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b3bdf19-11df-48e0-a735-59ffdd0aa820\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x848s" podUID="2b3bdf19-11df-48e0-a735-59ffdd0aa820" Apr 28 00:16:06.151495 containerd[1482]: time="2026-04-28T00:16:06.151173944Z" level=error msg="StopPodSandbox for \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\" failed" error="failed to destroy network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:06.153026 kubelet[2593]: E0428 00:16:06.152985 2593 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:06.153152 kubelet[2593]: E0428 00:16:06.153036 2593 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef"} Apr 28 00:16:06.153152 kubelet[2593]: E0428 00:16:06.153077 2593 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbde6d47-517b-4e34-9be8-87b093cf01ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 00:16:06.153152 kubelet[2593]: E0428 00:16:06.153105 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbde6d47-517b-4e34-9be8-87b093cf01ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-778646646c-g6l5c" podUID="fbde6d47-517b-4e34-9be8-87b093cf01ab" Apr 28 00:16:06.157409 systemd[1]: Started cri-containerd-6c38f8ea4254cd7d0880125e32a68dbeed8e58062aa02f754fdccdd8f2a761c4.scope - libcontainer container 6c38f8ea4254cd7d0880125e32a68dbeed8e58062aa02f754fdccdd8f2a761c4. 
Apr 28 00:16:06.161456 containerd[1482]: time="2026-04-28T00:16:06.161371880Z" level=error msg="StopPodSandbox for \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\" failed" error="failed to destroy network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:16:06.161977 kubelet[2593]: E0428 00:16:06.161766 2593 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:06.161977 kubelet[2593]: E0428 00:16:06.161820 2593 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96"} Apr 28 00:16:06.161977 kubelet[2593]: E0428 00:16:06.161897 2593 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b64f11de-3264-4d46-8dd1-0c473c7b70ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 28 00:16:06.161977 kubelet[2593]: E0428 00:16:06.161936 2593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b64f11de-3264-4d46-8dd1-0c473c7b70ea\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-6b4b7f4496-57sjc" podUID="b64f11de-3264-4d46-8dd1-0c473c7b70ea" Apr 28 00:16:06.194945 containerd[1482]: time="2026-04-28T00:16:06.194822300Z" level=info msg="StartContainer for \"6c38f8ea4254cd7d0880125e32a68dbeed8e58062aa02f754fdccdd8f2a761c4\" returns successfully" Apr 28 00:16:06.403818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854-shm.mount: Deactivated successfully. Apr 28 00:16:06.403944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee-shm.mount: Deactivated successfully. Apr 28 00:16:06.760985 systemd[1]: Created slice kubepods-besteffort-podde0a3e77_2dc7_403c_b5d8_86f1eb78c3c3.slice - libcontainer container kubepods-besteffort-podde0a3e77_2dc7_403c_b5d8_86f1eb78c3c3.slice. 
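The repeated `KillPodSandbox` failures above all reduce to one check: the Calico CNI plugin stat()s `/var/lib/calico/nodename` during delete and aborts if it is absent (the file is written by the calico/node container when it mounts `/var/lib/calico/`). A minimal sketch of that check, for reproducing the failure mode outside the plugin — the function name is hypothetical; the real check lives in Go in the Calico CNI plugin:

```python
import os

def check_calico_state(calico_dir="/var/lib/calico"):
    """Sketch of the stat() the Calico CNI plugin performs on delete.

    If the nodename file is absent, the plugin fails with the message seen
    in the log above: 'stat .../nodename: no such file or directory'.
    (Illustrative only; the real implementation is in projectcalico's
    cni-plugin, not this Python.)
    """
    nodename_file = os.path.join(calico_dir, "nodename")
    if os.path.isfile(nodename_file):
        with open(nodename_file) as f:
            return "nodename present: " + f.read().strip()
    return ("nodename missing: check that the calico/node container is "
            "running and has mounted %s/" % calico_dir)
```

On a healthy node the file simply contains the node name (here it would be `ci-4081-3-7-n-d098215774`), which is why the sandbox teardowns start succeeding later in the log once calico-node is up.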
Apr 28 00:16:06.766866 containerd[1482]: time="2026-04-28T00:16:06.766761886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9gwt,Uid:de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3,Namespace:calico-system,Attempt:0,}" Apr 28 00:16:06.954966 systemd-networkd[1381]: caliaa1da36da65: Link UP Apr 28 00:16:06.955273 systemd-networkd[1381]: caliaa1da36da65: Gained carrier Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.808 [ERROR][3758] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.834 [INFO][3758] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0 csi-node-driver- calico-system de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3 698 0 2026-04-28 00:15:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:95f96f7df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 csi-node-driver-w9gwt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaa1da36da65 [] [] }} ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.835 [INFO][3758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" 
WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.884 [INFO][3769] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" HandleID="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Workload="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.898 [INFO][3769] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" HandleID="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Workload="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002efde0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-d098215774", "pod":"csi-node-driver-w9gwt", "timestamp":"2026-04-28 00:16:06.884233649 +0000 UTC"}, Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000e9080)} Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.898 [INFO][3769] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.898 [INFO][3769] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.898 [INFO][3769] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.902 [INFO][3769] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.908 [INFO][3769] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.914 [INFO][3769] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.917 [INFO][3769] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.920 [INFO][3769] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.920 [INFO][3769] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.922 [INFO][3769] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5 Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.928 [INFO][3769] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.935 [INFO][3769] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.193/26] block=192.168.26.192/26 handle="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.935 [INFO][3769] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.193/26] handle="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.935 [INFO][3769] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:06.977418 containerd[1482]: 2026-04-28 00:16:06.935 [INFO][3769] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.193/26] IPv6=[] ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" HandleID="k8s-pod-network.63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Workload="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" Apr 28 00:16:06.978762 containerd[1482]: 2026-04-28 00:16:06.940 [INFO][3758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"csi-node-driver-w9gwt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa1da36da65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:06.978762 containerd[1482]: 2026-04-28 00:16:06.940 [INFO][3758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.193/32] ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" Apr 28 00:16:06.978762 containerd[1482]: 2026-04-28 00:16:06.940 [INFO][3758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa1da36da65 ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" Apr 28 00:16:06.978762 containerd[1482]: 2026-04-28 00:16:06.956 [INFO][3758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" Apr 28 00:16:06.978762 
containerd[1482]: 2026-04-28 00:16:06.958 [INFO][3758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5", Pod:"csi-node-driver-w9gwt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa1da36da65", MAC:"ae:24:d9:ff:3d:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:06.978762 containerd[1482]: 
2026-04-28 00:16:06.973 [INFO][3758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5" Namespace="calico-system" Pod="csi-node-driver-w9gwt" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-csi--node--driver--w9gwt-eth0" Apr 28 00:16:06.997470 containerd[1482]: time="2026-04-28T00:16:06.996467838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:06.997470 containerd[1482]: time="2026-04-28T00:16:06.996530049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:06.997470 containerd[1482]: time="2026-04-28T00:16:06.996545932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:06.997689 containerd[1482]: time="2026-04-28T00:16:06.996645031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:07.010036 containerd[1482]: time="2026-04-28T00:16:07.009982364Z" level=info msg="StopPodSandbox for \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\"" Apr 28 00:16:07.055516 systemd[1]: Started cri-containerd-63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5.scope - libcontainer container 63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5. 
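The `ipam/ipam.go` walkthrough above (confirm the host's affinity for block 192.168.26.192/26, load the block, claim the first free address) can be sketched roughly as follows — a simplified model only, assuming a plain in-memory set stands in for Calico's datastore, handles, and CAS block writes:

```python
import ipaddress

def auto_assign_ipv4(block_cidr, allocated, num=1):
    """Rough sketch of Calico's per-block auto-assignment: once the block
    affinity for the host is confirmed and the block is loaded, claim the
    first `num` free host addresses in it.

    `allocated` is a mutable set of already-claimed IPv4Address objects,
    standing in for the block's allocation state in the datastore.
    """
    block = ipaddress.ip_network(block_cidr)
    claimed = []
    for ip in block.hosts():          # .193 .. .254 for a /26
        if len(claimed) == num:
            break
        if ip not in allocated:
            allocated.add(ip)         # real Calico writes the block via CAS
            claimed.append(ip)
    return claimed
```

With an empty block this yields 192.168.26.193 (the address given to csi-node-driver-w9gwt above), and the next call yields 192.168.26.194, matching the address the log later assigns to whisker-57d48d8695-95hvc.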
Apr 28 00:16:07.075311 kubelet[2593]: I0428 00:16:07.075224 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mklj8" podStartSLOduration=3.260721966 podStartE2EDuration="16.075205414s" podCreationTimestamp="2026-04-28 00:15:51 +0000 UTC" firstStartedPulling="2026-04-28 00:15:51.558797991 +0000 UTC m=+19.943589796" lastFinishedPulling="2026-04-28 00:16:04.373281439 +0000 UTC m=+32.758073244" observedRunningTime="2026-04-28 00:16:07.069405178 +0000 UTC m=+35.454196983" watchObservedRunningTime="2026-04-28 00:16:07.075205414 +0000 UTC m=+35.459997219" Apr 28 00:16:07.115290 containerd[1482]: time="2026-04-28T00:16:07.115129226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w9gwt,Uid:de0a3e77-2dc7-403c-b5d8-86f1eb78c3c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5\"" Apr 28 00:16:07.119606 containerd[1482]: time="2026-04-28T00:16:07.119567658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.150 [INFO][3823] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.150 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" iface="eth0" netns="/var/run/netns/cni-ab041cc7-ed5f-743b-528f-8826cf595e7f" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.150 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" iface="eth0" netns="/var/run/netns/cni-ab041cc7-ed5f-743b-528f-8826cf595e7f" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.150 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" iface="eth0" netns="/var/run/netns/cni-ab041cc7-ed5f-743b-528f-8826cf595e7f" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.150 [INFO][3823] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.150 [INFO][3823] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.181 [INFO][3863] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.182 [INFO][3863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.182 [INFO][3863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.192 [WARNING][3863] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.192 [INFO][3863] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.196 [INFO][3863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:07.201145 containerd[1482]: 2026-04-28 00:16:07.198 [INFO][3823] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:07.202230 containerd[1482]: time="2026-04-28T00:16:07.201334904Z" level=info msg="TearDown network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\" successfully" Apr 28 00:16:07.202230 containerd[1482]: time="2026-04-28T00:16:07.201368030Z" level=info msg="StopPodSandbox for \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\" returns successfully" Apr 28 00:16:07.295242 kubelet[2593]: I0428 00:16:07.295148 2593 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-nginx-config\") pod \"fbde6d47-517b-4e34-9be8-87b093cf01ab\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " Apr 28 00:16:07.295483 kubelet[2593]: I0428 00:16:07.295256 2593 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-backend-key-pair\") pod \"fbde6d47-517b-4e34-9be8-87b093cf01ab\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " Apr 28 00:16:07.295483 kubelet[2593]: I0428 00:16:07.295314 2593 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-ca-bundle\") pod \"fbde6d47-517b-4e34-9be8-87b093cf01ab\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " Apr 28 00:16:07.295483 kubelet[2593]: I0428 00:16:07.295355 2593 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4tc\" (UniqueName: \"kubernetes.io/projected/fbde6d47-517b-4e34-9be8-87b093cf01ab-kube-api-access-7j4tc\") pod \"fbde6d47-517b-4e34-9be8-87b093cf01ab\" (UID: \"fbde6d47-517b-4e34-9be8-87b093cf01ab\") " Apr 28 00:16:07.300161 kubelet[2593]: I0428 00:16:07.300086 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbde6d47-517b-4e34-9be8-87b093cf01ab-kube-api-access-7j4tc" (OuterVolumeSpecName: "kube-api-access-7j4tc") pod "fbde6d47-517b-4e34-9be8-87b093cf01ab" (UID: "fbde6d47-517b-4e34-9be8-87b093cf01ab"). InnerVolumeSpecName "kube-api-access-7j4tc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 00:16:07.302890 kubelet[2593]: I0428 00:16:07.300548 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fbde6d47-517b-4e34-9be8-87b093cf01ab" (UID: "fbde6d47-517b-4e34-9be8-87b093cf01ab"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 00:16:07.302890 kubelet[2593]: I0428 00:16:07.300802 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "fbde6d47-517b-4e34-9be8-87b093cf01ab" (UID: "fbde6d47-517b-4e34-9be8-87b093cf01ab"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 00:16:07.304684 kubelet[2593]: I0428 00:16:07.304634 2593 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fbde6d47-517b-4e34-9be8-87b093cf01ab" (UID: "fbde6d47-517b-4e34-9be8-87b093cf01ab"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 28 00:16:07.395805 kubelet[2593]: I0428 00:16:07.395719 2593 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-backend-key-pair\") on node \"ci-4081-3-7-n-d098215774\" DevicePath \"\"" Apr 28 00:16:07.395805 kubelet[2593]: I0428 00:16:07.395760 2593 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-whisker-ca-bundle\") on node \"ci-4081-3-7-n-d098215774\" DevicePath \"\"" Apr 28 00:16:07.395805 kubelet[2593]: I0428 00:16:07.395771 2593 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7j4tc\" (UniqueName: \"kubernetes.io/projected/fbde6d47-517b-4e34-9be8-87b093cf01ab-kube-api-access-7j4tc\") on node \"ci-4081-3-7-n-d098215774\" DevicePath \"\"" Apr 28 00:16:07.395805 kubelet[2593]: I0428 00:16:07.395781 2593 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/fbde6d47-517b-4e34-9be8-87b093cf01ab-nginx-config\") on node \"ci-4081-3-7-n-d098215774\" DevicePath \"\"" Apr 28 00:16:07.396106 systemd[1]: run-netns-cni\x2dab041cc7\x2ded5f\x2d743b\x2d528f\x2d8826cf595e7f.mount: Deactivated successfully. Apr 28 00:16:07.396507 systemd[1]: var-lib-kubelet-pods-fbde6d47\x2d517b\x2d4e34\x2d9be8\x2d87b093cf01ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7j4tc.mount: Deactivated successfully. Apr 28 00:16:07.396568 systemd[1]: var-lib-kubelet-pods-fbde6d47\x2d517b\x2d4e34\x2d9be8\x2d87b093cf01ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 28 00:16:07.775589 systemd[1]: Removed slice kubepods-besteffort-podfbde6d47_517b_4e34_9be8_87b093cf01ab.slice - libcontainer container kubepods-besteffort-podfbde6d47_517b_4e34_9be8_87b093cf01ab.slice. Apr 28 00:16:08.111268 systemd[1]: Created slice kubepods-besteffort-podd0231d68_66da_4e07_b411_5ebe2d3dfb6e.slice - libcontainer container kubepods-besteffort-podd0231d68_66da_4e07_b411_5ebe2d3dfb6e.slice. 
Apr 28 00:16:08.140959 systemd-networkd[1381]: caliaa1da36da65: Gained IPv6LL Apr 28 00:16:08.202795 kubelet[2593]: I0428 00:16:08.202683 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0231d68-66da-4e07-b411-5ebe2d3dfb6e-whisker-backend-key-pair\") pod \"whisker-57d48d8695-95hvc\" (UID: \"d0231d68-66da-4e07-b411-5ebe2d3dfb6e\") " pod="calico-system/whisker-57d48d8695-95hvc" Apr 28 00:16:08.203187 kubelet[2593]: I0428 00:16:08.202811 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhtjt\" (UniqueName: \"kubernetes.io/projected/d0231d68-66da-4e07-b411-5ebe2d3dfb6e-kube-api-access-rhtjt\") pod \"whisker-57d48d8695-95hvc\" (UID: \"d0231d68-66da-4e07-b411-5ebe2d3dfb6e\") " pod="calico-system/whisker-57d48d8695-95hvc" Apr 28 00:16:08.203187 kubelet[2593]: I0428 00:16:08.202844 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d0231d68-66da-4e07-b411-5ebe2d3dfb6e-nginx-config\") pod \"whisker-57d48d8695-95hvc\" (UID: \"d0231d68-66da-4e07-b411-5ebe2d3dfb6e\") " pod="calico-system/whisker-57d48d8695-95hvc" Apr 28 00:16:08.203187 kubelet[2593]: I0428 00:16:08.202900 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0231d68-66da-4e07-b411-5ebe2d3dfb6e-whisker-ca-bundle\") pod \"whisker-57d48d8695-95hvc\" (UID: \"d0231d68-66da-4e07-b411-5ebe2d3dfb6e\") " pod="calico-system/whisker-57d48d8695-95hvc" Apr 28 00:16:08.428741 containerd[1482]: time="2026-04-28T00:16:08.428136470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d48d8695-95hvc,Uid:d0231d68-66da-4e07-b411-5ebe2d3dfb6e,Namespace:calico-system,Attempt:0,}" Apr 28 00:16:08.594707 
systemd-networkd[1381]: cali604c8a760e6: Link UP Apr 28 00:16:08.595727 systemd-networkd[1381]: cali604c8a760e6: Gained carrier Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.490 [ERROR][3994] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.506 [INFO][3994] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0 whisker-57d48d8695- calico-system d0231d68-66da-4e07-b411-5ebe2d3dfb6e 890 0 2026-04-28 00:16:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57d48d8695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 whisker-57d48d8695-95hvc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali604c8a760e6 [] [] }} ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.507 [INFO][3994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.536 [INFO][4006] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" HandleID="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" 
Workload="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.549 [INFO][4006] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" HandleID="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ffc20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-d098215774", "pod":"whisker-57d48d8695-95hvc", "timestamp":"2026-04-28 00:16:08.536679675 +0000 UTC"}, Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000273080)} Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.550 [INFO][4006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.550 [INFO][4006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.550 [INFO][4006] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.553 [INFO][4006] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.560 [INFO][4006] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.567 [INFO][4006] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.569 [INFO][4006] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.572 [INFO][4006] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.573 [INFO][4006] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.575 [INFO][4006] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030 Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.580 [INFO][4006] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.588 [INFO][4006] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.194/26] block=192.168.26.192/26 handle="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.588 [INFO][4006] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.194/26] handle="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.588 [INFO][4006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:08.622379 containerd[1482]: 2026-04-28 00:16:08.588 [INFO][4006] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.194/26] IPv6=[] ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" HandleID="k8s-pod-network.d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" Apr 28 00:16:08.623378 containerd[1482]: 2026-04-28 00:16:08.591 [INFO][3994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0", GenerateName:"whisker-57d48d8695-", Namespace:"calico-system", SelfLink:"", UID:"d0231d68-66da-4e07-b411-5ebe2d3dfb6e", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d48d8695", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"whisker-57d48d8695-95hvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali604c8a760e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:08.623378 containerd[1482]: 2026-04-28 00:16:08.591 [INFO][3994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.194/32] ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" Apr 28 00:16:08.623378 containerd[1482]: 2026-04-28 00:16:08.591 [INFO][3994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali604c8a760e6 ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" Apr 28 00:16:08.623378 containerd[1482]: 2026-04-28 00:16:08.600 [INFO][3994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" Apr 28 00:16:08.623378 containerd[1482]: 2026-04-28 00:16:08.600 [INFO][3994] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0", GenerateName:"whisker-57d48d8695-", Namespace:"calico-system", SelfLink:"", UID:"d0231d68-66da-4e07-b411-5ebe2d3dfb6e", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d48d8695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030", Pod:"whisker-57d48d8695-95hvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali604c8a760e6", MAC:"fe:77:1b:11:de:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:08.623378 containerd[1482]: 2026-04-28 00:16:08.619 [INFO][3994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030" Namespace="calico-system" Pod="whisker-57d48d8695-95hvc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--57d48d8695--95hvc-eth0" Apr 28 00:16:08.642219 containerd[1482]: time="2026-04-28T00:16:08.642106103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:08.642453 containerd[1482]: time="2026-04-28T00:16:08.642186717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:08.642453 containerd[1482]: time="2026-04-28T00:16:08.642212361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:08.642453 containerd[1482]: time="2026-04-28T00:16:08.642318100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:08.665075 systemd[1]: Started cri-containerd-d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030.scope - libcontainer container d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030. 
Apr 28 00:16:08.698644 containerd[1482]: time="2026-04-28T00:16:08.698526874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d48d8695-95hvc,Uid:d0231d68-66da-4e07-b411-5ebe2d3dfb6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030\"" Apr 28 00:16:09.207828 containerd[1482]: time="2026-04-28T00:16:09.206998007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:09.208668 containerd[1482]: time="2026-04-28T00:16:09.208628957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=7895994" Apr 28 00:16:09.210780 containerd[1482]: time="2026-04-28T00:16:09.210712941Z" level=info msg="ImageCreate event name:\"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:09.216904 containerd[1482]: time="2026-04-28T00:16:09.215913921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:09.216904 containerd[1482]: time="2026-04-28T00:16:09.216722575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"10471633\" in 2.09711379s" Apr 28 00:16:09.216904 containerd[1482]: time="2026-04-28T00:16:09.216752220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\"" Apr 28 00:16:09.218946 containerd[1482]: 
time="2026-04-28T00:16:09.218913377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 28 00:16:09.243822 containerd[1482]: time="2026-04-28T00:16:09.243775328Z" level=info msg="CreateContainer within sandbox \"63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 28 00:16:09.268508 containerd[1482]: time="2026-04-28T00:16:09.268464290Z" level=info msg="CreateContainer within sandbox \"63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"61bd9f2c56b089b60a0c7068228531c26b245a0e5d2649b167ea272515cd9be2\"" Apr 28 00:16:09.269202 containerd[1482]: time="2026-04-28T00:16:09.269178728Z" level=info msg="StartContainer for \"61bd9f2c56b089b60a0c7068228531c26b245a0e5d2649b167ea272515cd9be2\"" Apr 28 00:16:09.296969 systemd[1]: Started cri-containerd-61bd9f2c56b089b60a0c7068228531c26b245a0e5d2649b167ea272515cd9be2.scope - libcontainer container 61bd9f2c56b089b60a0c7068228531c26b245a0e5d2649b167ea272515cd9be2. Apr 28 00:16:09.331221 containerd[1482]: time="2026-04-28T00:16:09.331169577Z" level=info msg="StartContainer for \"61bd9f2c56b089b60a0c7068228531c26b245a0e5d2649b167ea272515cd9be2\" returns successfully" Apr 28 00:16:09.448367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491586266.mount: Deactivated successfully. 
Apr 28 00:16:09.759290 kubelet[2593]: I0428 00:16:09.759241 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbde6d47-517b-4e34-9be8-87b093cf01ab" path="/var/lib/kubelet/pods/fbde6d47-517b-4e34-9be8-87b093cf01ab/volumes" Apr 28 00:16:10.060270 systemd-networkd[1381]: cali604c8a760e6: Gained IPv6LL Apr 28 00:16:11.054447 containerd[1482]: time="2026-04-28T00:16:11.054161758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:11.056075 containerd[1482]: time="2026-04-28T00:16:11.055981958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=5896864" Apr 28 00:16:11.056790 containerd[1482]: time="2026-04-28T00:16:11.056727192Z" level=info msg="ImageCreate event name:\"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:11.059549 containerd[1482]: time="2026-04-28T00:16:11.059480295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:11.060811 containerd[1482]: time="2026-04-28T00:16:11.060223569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"8472495\" in 1.841094637s" Apr 28 00:16:11.060811 containerd[1482]: time="2026-04-28T00:16:11.060263936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\"" Apr 28 00:16:11.062693 
containerd[1482]: time="2026-04-28T00:16:11.062644701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 28 00:16:11.066751 containerd[1482]: time="2026-04-28T00:16:11.066622833Z" level=info msg="CreateContainer within sandbox \"d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 28 00:16:11.084458 containerd[1482]: time="2026-04-28T00:16:11.084275385Z" level=info msg="CreateContainer within sandbox \"d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"eb402fd026001fbe180da506f96b34f0571c03a94c621b1271f39d582c8325a5\"" Apr 28 00:16:11.087043 containerd[1482]: time="2026-04-28T00:16:11.085490012Z" level=info msg="StartContainer for \"eb402fd026001fbe180da506f96b34f0571c03a94c621b1271f39d582c8325a5\"" Apr 28 00:16:11.125110 systemd[1]: Started cri-containerd-eb402fd026001fbe180da506f96b34f0571c03a94c621b1271f39d582c8325a5.scope - libcontainer container eb402fd026001fbe180da506f96b34f0571c03a94c621b1271f39d582c8325a5. 
Apr 28 00:16:11.191835 containerd[1482]: time="2026-04-28T00:16:11.191525345Z" level=info msg="StartContainer for \"eb402fd026001fbe180da506f96b34f0571c03a94c621b1271f39d582c8325a5\" returns successfully" Apr 28 00:16:12.641390 containerd[1482]: time="2026-04-28T00:16:12.641290839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:12.643625 containerd[1482]: time="2026-04-28T00:16:12.643582459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=12456618" Apr 28 00:16:12.644928 containerd[1482]: time="2026-04-28T00:16:12.644836485Z" level=info msg="ImageCreate event name:\"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:12.649831 containerd[1482]: time="2026-04-28T00:16:12.649662481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"15032209\" in 1.586972132s" Apr 28 00:16:12.649831 containerd[1482]: time="2026-04-28T00:16:12.649713968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\"" Apr 28 00:16:12.653878 containerd[1482]: time="2026-04-28T00:16:12.651753151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 28 00:16:12.654903 containerd[1482]: time="2026-04-28T00:16:12.654763557Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:12.657014 containerd[1482]: time="2026-04-28T00:16:12.656974365Z" level=info msg="CreateContainer within sandbox \"63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 28 00:16:12.677675 containerd[1482]: time="2026-04-28T00:16:12.677547458Z" level=info msg="CreateContainer within sandbox \"63bb0005a660ac059c9c1d4a730ded00bfaeda537fc9dd3b167a6997da77d4a5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5e5c6d8baf05b116911d1fe8c25bb34d8332df9d94ed4fb8784267fcaf60a65e\"" Apr 28 00:16:12.678987 containerd[1482]: time="2026-04-28T00:16:12.678935183Z" level=info msg="StartContainer for \"5e5c6d8baf05b116911d1fe8c25bb34d8332df9d94ed4fb8784267fcaf60a65e\"" Apr 28 00:16:12.715146 systemd[1]: Started cri-containerd-5e5c6d8baf05b116911d1fe8c25bb34d8332df9d94ed4fb8784267fcaf60a65e.scope - libcontainer container 5e5c6d8baf05b116911d1fe8c25bb34d8332df9d94ed4fb8784267fcaf60a65e. 
Apr 28 00:16:12.750950 containerd[1482]: time="2026-04-28T00:16:12.750322534Z" level=info msg="StartContainer for \"5e5c6d8baf05b116911d1fe8c25bb34d8332df9d94ed4fb8784267fcaf60a65e\" returns successfully" Apr 28 00:16:12.832811 kubelet[2593]: I0428 00:16:12.832242 2593 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 28 00:16:12.832811 kubelet[2593]: I0428 00:16:12.832306 2593 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 28 00:16:13.058989 kubelet[2593]: I0428 00:16:13.057634 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w9gwt" podStartSLOduration=16.525484 podStartE2EDuration="22.057612202s" podCreationTimestamp="2026-04-28 00:15:51 +0000 UTC" firstStartedPulling="2026-04-28 00:16:07.118322796 +0000 UTC m=+35.503114601" lastFinishedPulling="2026-04-28 00:16:12.650450998 +0000 UTC m=+41.035242803" observedRunningTime="2026-04-28 00:16:13.054577407 +0000 UTC m=+41.439369212" watchObservedRunningTime="2026-04-28 00:16:13.057612202 +0000 UTC m=+41.442404007" Apr 28 00:16:14.961077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085949867.mount: Deactivated successfully. 
Apr 28 00:16:14.990283 containerd[1482]: time="2026-04-28T00:16:14.989449342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=15624823" Apr 28 00:16:14.990283 containerd[1482]: time="2026-04-28T00:16:14.990231171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:14.991844 containerd[1482]: time="2026-04-28T00:16:14.991809790Z" level=info msg="ImageCreate event name:\"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:14.992756 containerd[1482]: time="2026-04-28T00:16:14.992684551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:14.993581 containerd[1482]: time="2026-04-28T00:16:14.993548951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"15624653\" in 2.341745152s" Apr 28 00:16:14.993687 containerd[1482]: time="2026-04-28T00:16:14.993671328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\"" Apr 28 00:16:14.999436 containerd[1482]: time="2026-04-28T00:16:14.999377400Z" level=info msg="CreateContainer within sandbox \"d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 28 00:16:15.019093 
containerd[1482]: time="2026-04-28T00:16:15.019035568Z" level=info msg="CreateContainer within sandbox \"d0251b37572c8255b790a96b4db10899b2cfa670fe28c829fee3be44c7b7e030\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0fdf09aa116a6e19a9632b59fe91e3490e21929e3cf48db50d1776622fc1c50f\"" Apr 28 00:16:15.020745 containerd[1482]: time="2026-04-28T00:16:15.020606619Z" level=info msg="StartContainer for \"0fdf09aa116a6e19a9632b59fe91e3490e21929e3cf48db50d1776622fc1c50f\"" Apr 28 00:16:15.062083 systemd[1]: Started cri-containerd-0fdf09aa116a6e19a9632b59fe91e3490e21929e3cf48db50d1776622fc1c50f.scope - libcontainer container 0fdf09aa116a6e19a9632b59fe91e3490e21929e3cf48db50d1776622fc1c50f. Apr 28 00:16:15.097242 containerd[1482]: time="2026-04-28T00:16:15.097064132Z" level=info msg="StartContainer for \"0fdf09aa116a6e19a9632b59fe91e3490e21929e3cf48db50d1776622fc1c50f\" returns successfully" Apr 28 00:16:16.068952 kubelet[2593]: I0428 00:16:16.068476 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57d48d8695-95hvc" podStartSLOduration=1.77603559 podStartE2EDuration="8.068447419s" podCreationTimestamp="2026-04-28 00:16:08 +0000 UTC" firstStartedPulling="2026-04-28 00:16:08.702319046 +0000 UTC m=+37.087110851" lastFinishedPulling="2026-04-28 00:16:14.994730875 +0000 UTC m=+43.379522680" observedRunningTime="2026-04-28 00:16:16.067518498 +0000 UTC m=+44.452310343" watchObservedRunningTime="2026-04-28 00:16:16.068447419 +0000 UTC m=+44.453239264" Apr 28 00:16:16.691589 kubelet[2593]: I0428 00:16:16.691108 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:16:16.754948 containerd[1482]: time="2026-04-28T00:16:16.754824958Z" level=info msg="StopPodSandbox for \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\"" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.834 [INFO][4418] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.834 [INFO][4418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" iface="eth0" netns="/var/run/netns/cni-16d48212-09b4-58da-11e3-ca1a66042b4d" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.835 [INFO][4418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" iface="eth0" netns="/var/run/netns/cni-16d48212-09b4-58da-11e3-ca1a66042b4d" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.836 [INFO][4418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" iface="eth0" netns="/var/run/netns/cni-16d48212-09b4-58da-11e3-ca1a66042b4d" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.836 [INFO][4418] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.836 [INFO][4418] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.874 [INFO][4428] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.875 [INFO][4428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.875 [INFO][4428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.889 [WARNING][4428] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.889 [INFO][4428] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.893 [INFO][4428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:16.899557 containerd[1482]: 2026-04-28 00:16:16.897 [INFO][4418] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:16.901829 containerd[1482]: time="2026-04-28T00:16:16.900507817Z" level=info msg="TearDown network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\" successfully" Apr 28 00:16:16.901829 containerd[1482]: time="2026-04-28T00:16:16.900928511Z" level=info msg="StopPodSandbox for \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\" returns successfully" Apr 28 00:16:16.905520 systemd[1]: run-netns-cni\x2d16d48212\x2d09b4\x2d58da\x2d11e3\x2dca1a66042b4d.mount: Deactivated successfully. 
Apr 28 00:16:16.907740 containerd[1482]: time="2026-04-28T00:16:16.907688272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-fbwcj,Uid:3bfb0bf9-7964-4238-9b69-c54d44a2b3dc,Namespace:calico-system,Attempt:1,}" Apr 28 00:16:17.095893 systemd-networkd[1381]: cali91deac84bee: Link UP Apr 28 00:16:17.099739 systemd-networkd[1381]: cali91deac84bee: Gained carrier Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:16.973 [ERROR][4436] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:16.990 [INFO][4436] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0 calico-apiserver-57fccbd9fd- calico-system 3bfb0bf9-7964-4238-9b69-c54d44a2b3dc 944 0 2026-04-28 00:15:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57fccbd9fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 calico-apiserver-57fccbd9fd-fbwcj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali91deac84bee [] [] }} ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:16.990 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" 
WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.021 [INFO][4462] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" HandleID="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.034 [INFO][4462] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" HandleID="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ffc00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-d098215774", "pod":"calico-apiserver-57fccbd9fd-fbwcj", "timestamp":"2026-04-28 00:16:17.021808699 +0000 UTC"}, Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b4f20)} Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.034 [INFO][4462] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.034 [INFO][4462] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.034 [INFO][4462] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.038 [INFO][4462] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.048 [INFO][4462] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.059 [INFO][4462] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.064 [INFO][4462] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.068 [INFO][4462] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.068 [INFO][4462] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.070 [INFO][4462] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1 Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.077 [INFO][4462] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.086 [INFO][4462] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.195/26] block=192.168.26.192/26 handle="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.087 [INFO][4462] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.195/26] handle="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.087 [INFO][4462] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:17.126302 containerd[1482]: 2026-04-28 00:16:17.087 [INFO][4462] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.195/26] IPv6=[] ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" HandleID="k8s-pod-network.babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:17.129251 containerd[1482]: 2026-04-28 00:16:17.091 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"calico-apiserver-57fccbd9fd-fbwcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91deac84bee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:17.129251 containerd[1482]: 2026-04-28 00:16:17.091 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.195/32] ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:17.129251 containerd[1482]: 2026-04-28 00:16:17.091 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91deac84bee ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:17.129251 containerd[1482]: 2026-04-28 00:16:17.099 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" 
WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:17.129251 containerd[1482]: 2026-04-28 00:16:17.101 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1", Pod:"calico-apiserver-57fccbd9fd-fbwcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91deac84bee", MAC:"82:db:de:c7:70:e4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:17.129251 containerd[1482]: 2026-04-28 00:16:17.121 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-fbwcj" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:17.160938 containerd[1482]: time="2026-04-28T00:16:17.160643373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:17.160938 containerd[1482]: time="2026-04-28T00:16:17.160756587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:17.160938 containerd[1482]: time="2026-04-28T00:16:17.160773229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:17.161218 containerd[1482]: time="2026-04-28T00:16:17.160953572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:17.189111 systemd[1]: Started cri-containerd-babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1.scope - libcontainer container babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1. 
Apr 28 00:16:17.237303 containerd[1482]: time="2026-04-28T00:16:17.237221336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-fbwcj,Uid:3bfb0bf9-7964-4238-9b69-c54d44a2b3dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1\"" Apr 28 00:16:17.240490 containerd[1482]: time="2026-04-28T00:16:17.240224235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 00:16:17.512880 kernel: calico-node[4519]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 28 00:16:17.756694 containerd[1482]: time="2026-04-28T00:16:17.755558354Z" level=info msg="StopPodSandbox for \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\"" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.832 [INFO][4555] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.832 [INFO][4555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" iface="eth0" netns="/var/run/netns/cni-16dea186-ce2b-285d-a0c7-def3f8e8cb0a" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.833 [INFO][4555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" iface="eth0" netns="/var/run/netns/cni-16dea186-ce2b-285d-a0c7-def3f8e8cb0a" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.834 [INFO][4555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" iface="eth0" netns="/var/run/netns/cni-16dea186-ce2b-285d-a0c7-def3f8e8cb0a" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.834 [INFO][4555] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.834 [INFO][4555] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.862 [INFO][4563] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.863 [INFO][4563] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.863 [INFO][4563] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.874 [WARNING][4563] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.875 [INFO][4563] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.879 [INFO][4563] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:17.883934 containerd[1482]: 2026-04-28 00:16:17.880 [INFO][4555] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:17.885281 containerd[1482]: time="2026-04-28T00:16:17.885168462Z" level=info msg="TearDown network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\" successfully" Apr 28 00:16:17.885281 containerd[1482]: time="2026-04-28T00:16:17.885228069Z" level=info msg="StopPodSandbox for \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\" returns successfully" Apr 28 00:16:17.889681 containerd[1482]: time="2026-04-28T00:16:17.889627345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-k8kd4,Uid:9fe175ef-faaa-484e-a8c4-f822ee0538e6,Namespace:calico-system,Attempt:1,}" Apr 28 00:16:17.903942 systemd[1]: run-netns-cni\x2d16dea186\x2dce2b\x2d285d\x2da0c7\x2ddef3f8e8cb0a.mount: Deactivated successfully. 
Apr 28 00:16:18.055126 systemd-networkd[1381]: vxlan.calico: Link UP Apr 28 00:16:18.055143 systemd-networkd[1381]: vxlan.calico: Gained carrier Apr 28 00:16:18.120617 systemd-networkd[1381]: cali912044893d7: Link UP Apr 28 00:16:18.122430 systemd-networkd[1381]: cali912044893d7: Gained carrier Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:17.974 [INFO][4573] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0 calico-apiserver-57fccbd9fd- calico-system 9fe175ef-faaa-484e-a8c4-f822ee0538e6 952 0 2026-04-28 00:15:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57fccbd9fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 calico-apiserver-57fccbd9fd-k8kd4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali912044893d7 [] [] }} ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:17.976 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.017 [INFO][4596] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" 
HandleID="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.032 [INFO][4596] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" HandleID="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400026bdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-d098215774", "pod":"calico-apiserver-57fccbd9fd-k8kd4", "timestamp":"2026-04-28 00:16:18.017617028 +0000 UTC"}, Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b7b80)} Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.032 [INFO][4596] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.032 [INFO][4596] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.032 [INFO][4596] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.041 [INFO][4596] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.060 [INFO][4596] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.079 [INFO][4596] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.083 [INFO][4596] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.087 [INFO][4596] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.088 [INFO][4596] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.091 [INFO][4596] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1 Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.103 [INFO][4596] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.115 [INFO][4596] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.196/26] block=192.168.26.192/26 handle="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.115 [INFO][4596] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.196/26] handle="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.115 [INFO][4596] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:18.143287 containerd[1482]: 2026-04-28 00:16:18.115 [INFO][4596] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.196/26] IPv6=[] ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" HandleID="k8s-pod-network.23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:18.143957 containerd[1482]: 2026-04-28 00:16:18.118 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"9fe175ef-faaa-484e-a8c4-f822ee0538e6", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"calico-apiserver-57fccbd9fd-k8kd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali912044893d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:18.143957 containerd[1482]: 2026-04-28 00:16:18.119 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.196/32] ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:18.143957 containerd[1482]: 2026-04-28 00:16:18.119 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali912044893d7 ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:18.143957 containerd[1482]: 2026-04-28 00:16:18.122 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" 
WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:18.143957 containerd[1482]: 2026-04-28 00:16:18.122 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"9fe175ef-faaa-484e-a8c4-f822ee0538e6", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1", Pod:"calico-apiserver-57fccbd9fd-k8kd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali912044893d7", MAC:"5e:fd:45:2b:dd:95", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:18.143957 containerd[1482]: 2026-04-28 00:16:18.138 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1" Namespace="calico-system" Pod="calico-apiserver-57fccbd9fd-k8kd4" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:18.173270 containerd[1482]: time="2026-04-28T00:16:18.172804492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:18.173270 containerd[1482]: time="2026-04-28T00:16:18.172912305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:18.173270 containerd[1482]: time="2026-04-28T00:16:18.172928347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:18.173658 containerd[1482]: time="2026-04-28T00:16:18.173141373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:18.209235 systemd[1]: Started cri-containerd-23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1.scope - libcontainer container 23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1. 
Apr 28 00:16:18.256323 containerd[1482]: time="2026-04-28T00:16:18.256251983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57fccbd9fd-k8kd4,Uid:9fe175ef-faaa-484e-a8c4-f822ee0538e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1\"" Apr 28 00:16:18.756420 containerd[1482]: time="2026-04-28T00:16:18.755364455Z" level=info msg="StopPodSandbox for \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\"" Apr 28 00:16:18.828344 systemd-networkd[1381]: cali91deac84bee: Gained IPv6LL Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.832 [INFO][4726] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.832 [INFO][4726] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" iface="eth0" netns="/var/run/netns/cni-fcd50247-1e6d-b75f-76cb-cb8406fb1069" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.832 [INFO][4726] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" iface="eth0" netns="/var/run/netns/cni-fcd50247-1e6d-b75f-76cb-cb8406fb1069" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.836 [INFO][4726] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" iface="eth0" netns="/var/run/netns/cni-fcd50247-1e6d-b75f-76cb-cb8406fb1069" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.836 [INFO][4726] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.836 [INFO][4726] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.856 [INFO][4734] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.857 [INFO][4734] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.857 [INFO][4734] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.869 [WARNING][4734] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.869 [INFO][4734] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.871 [INFO][4734] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:18.878213 containerd[1482]: 2026-04-28 00:16:18.874 [INFO][4726] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:18.881630 containerd[1482]: time="2026-04-28T00:16:18.881551196Z" level=info msg="TearDown network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\" successfully" Apr 28 00:16:18.881630 containerd[1482]: time="2026-04-28T00:16:18.881589440Z" level=info msg="StopPodSandbox for \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\" returns successfully" Apr 28 00:16:18.883575 systemd[1]: run-netns-cni\x2dfcd50247\x2d1e6d\x2db75f\x2d76cb\x2dcb8406fb1069.mount: Deactivated successfully. 
Apr 28 00:16:18.886058 containerd[1482]: time="2026-04-28T00:16:18.885519603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x848s,Uid:2b3bdf19-11df-48e0-a735-59ffdd0aa820,Namespace:kube-system,Attempt:1,}" Apr 28 00:16:19.075411 systemd-networkd[1381]: cali1adbdc14391: Link UP Apr 28 00:16:19.076032 systemd-networkd[1381]: cali1adbdc14391: Gained carrier Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:18.963 [INFO][4741] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0 coredns-66bc5c9577- kube-system 2b3bdf19-11df-48e0-a735-59ffdd0aa820 959 0 2026-04-28 00:15:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 coredns-66bc5c9577-x848s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1adbdc14391 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:18.963 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.002 [INFO][4753] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" 
HandleID="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.016 [INFO][4753] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" HandleID="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002efac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-n-d098215774", "pod":"coredns-66bc5c9577-x848s", "timestamp":"2026-04-28 00:16:19.002596419 +0000 UTC"}, Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004f0f20)} Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.016 [INFO][4753] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.016 [INFO][4753] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.016 [INFO][4753] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.021 [INFO][4753] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.029 [INFO][4753] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.039 [INFO][4753] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.041 [INFO][4753] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.045 [INFO][4753] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.045 [INFO][4753] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.049 [INFO][4753] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.055 [INFO][4753] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.066 [INFO][4753] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.197/26] block=192.168.26.192/26 handle="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.066 [INFO][4753] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.197/26] handle="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.066 [INFO][4753] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:19.112094 containerd[1482]: 2026-04-28 00:16:19.066 [INFO][4753] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.197/26] IPv6=[] ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" HandleID="k8s-pod-network.cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:19.113555 containerd[1482]: 2026-04-28 00:16:19.070 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2b3bdf19-11df-48e0-a735-59ffdd0aa820", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"coredns-66bc5c9577-x848s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1adbdc14391", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:19.113555 containerd[1482]: 2026-04-28 00:16:19.070 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.197/32] ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:19.113555 containerd[1482]: 2026-04-28 00:16:19.070 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1adbdc14391 
ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:19.113555 containerd[1482]: 2026-04-28 00:16:19.083 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:19.113555 containerd[1482]: 2026-04-28 00:16:19.083 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2b3bdf19-11df-48e0-a735-59ffdd0aa820", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", 
ContainerID:"cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b", Pod:"coredns-66bc5c9577-x848s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1adbdc14391", MAC:"3e:1e:b7:95:61:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:19.113809 containerd[1482]: 2026-04-28 00:16:19.106 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b" Namespace="kube-system" Pod="coredns-66bc5c9577-x848s" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:19.142876 containerd[1482]: time="2026-04-28T00:16:19.142626549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:19.142876 containerd[1482]: time="2026-04-28T00:16:19.142757364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:19.143319 containerd[1482]: time="2026-04-28T00:16:19.142799249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:19.143319 containerd[1482]: time="2026-04-28T00:16:19.143199137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:19.169144 systemd[1]: Started cri-containerd-cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b.scope - libcontainer container cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b. Apr 28 00:16:19.218198 containerd[1482]: time="2026-04-28T00:16:19.218105686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x848s,Uid:2b3bdf19-11df-48e0-a735-59ffdd0aa820,Namespace:kube-system,Attempt:1,} returns sandbox id \"cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b\"" Apr 28 00:16:19.229549 containerd[1482]: time="2026-04-28T00:16:19.229471084Z" level=info msg="CreateContainer within sandbox \"cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:16:19.248239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970624195.mount: Deactivated successfully. 
Apr 28 00:16:19.251086 containerd[1482]: time="2026-04-28T00:16:19.250949090Z" level=info msg="CreateContainer within sandbox \"cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8463fa1ad344e63881fbf188770e28e6228e42ac881eaa9e6e5c68211de70479\"" Apr 28 00:16:19.252764 containerd[1482]: time="2026-04-28T00:16:19.251873040Z" level=info msg="StartContainer for \"8463fa1ad344e63881fbf188770e28e6228e42ac881eaa9e6e5c68211de70479\"" Apr 28 00:16:19.282058 systemd[1]: Started cri-containerd-8463fa1ad344e63881fbf188770e28e6228e42ac881eaa9e6e5c68211de70479.scope - libcontainer container 8463fa1ad344e63881fbf188770e28e6228e42ac881eaa9e6e5c68211de70479. Apr 28 00:16:19.314113 containerd[1482]: time="2026-04-28T00:16:19.314038507Z" level=info msg="StartContainer for \"8463fa1ad344e63881fbf188770e28e6228e42ac881eaa9e6e5c68211de70479\" returns successfully" Apr 28 00:16:19.340933 systemd-networkd[1381]: cali912044893d7: Gained IPv6LL Apr 28 00:16:19.757771 containerd[1482]: time="2026-04-28T00:16:19.754942582Z" level=info msg="StopPodSandbox for \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\"" Apr 28 00:16:19.853933 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.814 [INFO][4858] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.814 [INFO][4858] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" iface="eth0" netns="/var/run/netns/cni-e9738682-b03f-e80d-ad06-ce79f9865472" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.814 [INFO][4858] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" iface="eth0" netns="/var/run/netns/cni-e9738682-b03f-e80d-ad06-ce79f9865472" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.814 [INFO][4858] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" iface="eth0" netns="/var/run/netns/cni-e9738682-b03f-e80d-ad06-ce79f9865472" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.814 [INFO][4858] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.815 [INFO][4858] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.838 [INFO][4866] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.838 [INFO][4866] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.838 [INFO][4866] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.849 [WARNING][4866] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.849 [INFO][4866] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.852 [INFO][4866] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:19.858771 containerd[1482]: 2026-04-28 00:16:19.857 [INFO][4858] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:19.859278 containerd[1482]: time="2026-04-28T00:16:19.859000374Z" level=info msg="TearDown network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\" successfully" Apr 28 00:16:19.859278 containerd[1482]: time="2026-04-28T00:16:19.859032577Z" level=info msg="StopPodSandbox for \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\" returns successfully" Apr 28 00:16:19.863708 containerd[1482]: time="2026-04-28T00:16:19.863251001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wphhk,Uid:962851dd-9192-4cce-9ab2-039449d01d65,Namespace:kube-system,Attempt:1,}" Apr 28 00:16:19.909191 systemd[1]: run-netns-cni\x2de9738682\x2db03f\x2de80d\x2dad06\x2dce79f9865472.mount: Deactivated successfully. 
Apr 28 00:16:20.048944 systemd-networkd[1381]: calif9a7c4f1a20: Link UP Apr 28 00:16:20.052966 systemd-networkd[1381]: calif9a7c4f1a20: Gained carrier Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.927 [INFO][4873] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0 coredns-66bc5c9577- kube-system 962851dd-9192-4cce-9ab2-039449d01d65 973 0 2026-04-28 00:15:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 coredns-66bc5c9577-wphhk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9a7c4f1a20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.927 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.962 [INFO][4885] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" HandleID="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.974 [INFO][4885] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" HandleID="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400026bae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-n-d098215774", "pod":"coredns-66bc5c9577-wphhk", "timestamp":"2026-04-28 00:16:19.962469175 +0000 UTC"}, Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400033cf20)} Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.975 [INFO][4885] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.975 [INFO][4885] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.975 [INFO][4885] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.979 [INFO][4885] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.987 [INFO][4885] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.995 [INFO][4885] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:19.999 [INFO][4885] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.003 [INFO][4885] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.003 [INFO][4885] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.006 [INFO][4885] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.017 [INFO][4885] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.036 [INFO][4885] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.26.198/26] block=192.168.26.192/26 handle="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.037 [INFO][4885] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.198/26] handle="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.037 [INFO][4885] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:20.085272 containerd[1482]: 2026-04-28 00:16:20.041 [INFO][4885] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.198/26] IPv6=[] ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" HandleID="k8s-pod-network.3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:20.086111 containerd[1482]: 2026-04-28 00:16:20.044 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"962851dd-9192-4cce-9ab2-039449d01d65", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"coredns-66bc5c9577-wphhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9a7c4f1a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:20.086111 containerd[1482]: 2026-04-28 00:16:20.044 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.198/32] ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:20.086111 containerd[1482]: 2026-04-28 00:16:20.044 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9a7c4f1a20 
ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:20.086111 containerd[1482]: 2026-04-28 00:16:20.052 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:20.086111 containerd[1482]: 2026-04-28 00:16:20.053 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"962851dd-9192-4cce-9ab2-039449d01d65", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", 
ContainerID:"3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd", Pod:"coredns-66bc5c9577-wphhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9a7c4f1a20", MAC:"26:fb:f0:ab:45:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:20.086275 containerd[1482]: 2026-04-28 00:16:20.080 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd" Namespace="kube-system" Pod="coredns-66bc5c9577-wphhk" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:20.108959 systemd-networkd[1381]: cali1adbdc14391: Gained IPv6LL Apr 28 00:16:20.154062 containerd[1482]: time="2026-04-28T00:16:20.153661774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:20.154062 containerd[1482]: time="2026-04-28T00:16:20.153735062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:20.154062 containerd[1482]: time="2026-04-28T00:16:20.153750944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:20.154062 containerd[1482]: time="2026-04-28T00:16:20.153921244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:20.203202 systemd[1]: Started cri-containerd-3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd.scope - libcontainer container 3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd. Apr 28 00:16:20.221455 kubelet[2593]: I0428 00:16:20.219661 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x848s" podStartSLOduration=43.219640928 podStartE2EDuration="43.219640928s" podCreationTimestamp="2026-04-28 00:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:16:20.171911336 +0000 UTC m=+48.556703181" watchObservedRunningTime="2026-04-28 00:16:20.219640928 +0000 UTC m=+48.604432733" Apr 28 00:16:20.298218 containerd[1482]: time="2026-04-28T00:16:20.298174502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wphhk,Uid:962851dd-9192-4cce-9ab2-039449d01d65,Namespace:kube-system,Attempt:1,} returns sandbox id \"3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd\"" Apr 28 00:16:20.325930 containerd[1482]: time="2026-04-28T00:16:20.309679880Z" level=info msg="CreateContainer within sandbox \"3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:16:20.351324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956490433.mount: Deactivated successfully. Apr 28 00:16:20.354956 containerd[1482]: time="2026-04-28T00:16:20.354804408Z" level=info msg="CreateContainer within sandbox \"3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d8b6b2793348f033f000788e829221517807b8ba8042f200f152521676ee03e\"" Apr 28 00:16:20.355686 containerd[1482]: time="2026-04-28T00:16:20.355586419Z" level=info msg="StartContainer for \"6d8b6b2793348f033f000788e829221517807b8ba8042f200f152521676ee03e\"" Apr 28 00:16:20.404110 systemd[1]: Started cri-containerd-6d8b6b2793348f033f000788e829221517807b8ba8042f200f152521676ee03e.scope - libcontainer container 6d8b6b2793348f033f000788e829221517807b8ba8042f200f152521676ee03e. Apr 28 00:16:20.470546 containerd[1482]: time="2026-04-28T00:16:20.470414975Z" level=info msg="StartContainer for \"6d8b6b2793348f033f000788e829221517807b8ba8042f200f152521676ee03e\" returns successfully" Apr 28 00:16:20.755673 containerd[1482]: time="2026-04-28T00:16:20.754196941Z" level=info msg="StopPodSandbox for \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\"" Apr 28 00:16:20.757574 containerd[1482]: time="2026-04-28T00:16:20.757264457Z" level=info msg="StopPodSandbox for \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\"" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.843 [INFO][5007] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.843 [INFO][5007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" iface="eth0" netns="/var/run/netns/cni-8e9bd436-4b6a-14e4-ea5a-3bb166a09649" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.843 [INFO][5007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" iface="eth0" netns="/var/run/netns/cni-8e9bd436-4b6a-14e4-ea5a-3bb166a09649" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.844 [INFO][5007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" iface="eth0" netns="/var/run/netns/cni-8e9bd436-4b6a-14e4-ea5a-3bb166a09649" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.844 [INFO][5007] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.844 [INFO][5007] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.913 [INFO][5020] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.913 [INFO][5020] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.913 [INFO][5020] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.929 [WARNING][5020] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.929 [INFO][5020] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.933 [INFO][5020] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:20.937357 containerd[1482]: 2026-04-28 00:16:20.935 [INFO][5007] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:20.941479 containerd[1482]: time="2026-04-28T00:16:20.939004675Z" level=info msg="TearDown network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\" successfully" Apr 28 00:16:20.941479 containerd[1482]: time="2026-04-28T00:16:20.939046320Z" level=info msg="StopPodSandbox for \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\" returns successfully" Apr 28 00:16:20.940530 systemd[1]: run-netns-cni\x2d8e9bd436\x2d4b6a\x2d14e4\x2dea5a\x2d3bb166a09649.mount: Deactivated successfully. 
Apr 28 00:16:20.945450 containerd[1482]: time="2026-04-28T00:16:20.945225879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b95f8d-872zf,Uid:8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9,Namespace:calico-system,Attempt:1,}" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.861 [INFO][5006] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.861 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" iface="eth0" netns="/var/run/netns/cni-ab0475d2-3b73-e778-8eb2-98e3e993248b" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.861 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" iface="eth0" netns="/var/run/netns/cni-ab0475d2-3b73-e778-8eb2-98e3e993248b" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.862 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" iface="eth0" netns="/var/run/netns/cni-ab0475d2-3b73-e778-8eb2-98e3e993248b" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.862 [INFO][5006] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.862 [INFO][5006] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.931 [INFO][5025] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.931 [INFO][5025] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.933 [INFO][5025] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.953 [WARNING][5025] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.953 [INFO][5025] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.956 [INFO][5025] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:20.977019 containerd[1482]: 2026-04-28 00:16:20.970 [INFO][5006] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:20.979821 systemd[1]: run-netns-cni\x2dab0475d2\x2d3b73\x2de778\x2d8eb2\x2d98e3e993248b.mount: Deactivated successfully. 
Apr 28 00:16:20.981070 containerd[1482]: time="2026-04-28T00:16:20.980985878Z" level=info msg="TearDown network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\" successfully" Apr 28 00:16:20.981070 containerd[1482]: time="2026-04-28T00:16:20.981023842Z" level=info msg="StopPodSandbox for \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\" returns successfully" Apr 28 00:16:20.986041 containerd[1482]: time="2026-04-28T00:16:20.986008782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-57sjc,Uid:b64f11de-3264-4d46-8dd1-0c473c7b70ea,Namespace:calico-system,Attempt:1,}" Apr 28 00:16:21.221952 systemd-networkd[1381]: cali525e838ac30: Link UP Apr 28 00:16:21.222611 systemd-networkd[1381]: cali525e838ac30: Gained carrier Apr 28 00:16:21.241969 kubelet[2593]: I0428 00:16:21.241319 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wphhk" podStartSLOduration=44.241287203 podStartE2EDuration="44.241287203s" podCreationTimestamp="2026-04-28 00:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:16:21.172408276 +0000 UTC m=+49.557200081" watchObservedRunningTime="2026-04-28 00:16:21.241287203 +0000 UTC m=+49.626079048" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.038 [INFO][5033] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0 calico-kube-controllers-6d9b95f8d- calico-system 8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9 994 0 2026-04-28 00:15:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d9b95f8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 calico-kube-controllers-6d9b95f8d-872zf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali525e838ac30 [] [] }} ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" Pod="calico-kube-controllers-6d9b95f8d-872zf" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.038 [INFO][5033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" Pod="calico-kube-controllers-6d9b95f8d-872zf" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.109 [INFO][5057] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" HandleID="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.132 [INFO][5057] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" HandleID="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f61a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-d098215774", "pod":"calico-kube-controllers-6d9b95f8d-872zf", "timestamp":"2026-04-28 00:16:21.109565994 +0000 UTC"}, 
Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000188dc0)} Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.132 [INFO][5057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.132 [INFO][5057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.132 [INFO][5057] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.138 [INFO][5057] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.153 [INFO][5057] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.163 [INFO][5057] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.167 [INFO][5057] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.175 [INFO][5057] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.175 [INFO][5057] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 
containerd[1482]: 2026-04-28 00:16:21.184 [INFO][5057] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.195 [INFO][5057] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.209 [INFO][5057] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.199/26] block=192.168.26.192/26 handle="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.209 [INFO][5057] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.199/26] handle="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.209 [INFO][5057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 28 00:16:21.247427 containerd[1482]: 2026-04-28 00:16:21.209 [INFO][5057] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.199/26] IPv6=[] ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" HandleID="k8s-pod-network.f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:21.250293 containerd[1482]: 2026-04-28 00:16:21.214 [INFO][5033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" Pod="calico-kube-controllers-6d9b95f8d-872zf" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0", GenerateName:"calico-kube-controllers-6d9b95f8d-", Namespace:"calico-system", SelfLink:"", UID:"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b95f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"calico-kube-controllers-6d9b95f8d-872zf", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali525e838ac30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:21.250293 containerd[1482]: 2026-04-28 00:16:21.214 [INFO][5033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.199/32] ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" Pod="calico-kube-controllers-6d9b95f8d-872zf" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:21.250293 containerd[1482]: 2026-04-28 00:16:21.214 [INFO][5033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali525e838ac30 ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" Pod="calico-kube-controllers-6d9b95f8d-872zf" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:21.250293 containerd[1482]: 2026-04-28 00:16:21.220 [INFO][5033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" Pod="calico-kube-controllers-6d9b95f8d-872zf" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:21.250293 containerd[1482]: 2026-04-28 00:16:21.223 [INFO][5033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" Pod="calico-kube-controllers-6d9b95f8d-872zf" 
WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0", GenerateName:"calico-kube-controllers-6d9b95f8d-", Namespace:"calico-system", SelfLink:"", UID:"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b95f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb", Pod:"calico-kube-controllers-6d9b95f8d-872zf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali525e838ac30", MAC:"c6:bc:ab:32:61:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:21.250293 containerd[1482]: 2026-04-28 00:16:21.239 [INFO][5033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb" Namespace="calico-system" 
Pod="calico-kube-controllers-6d9b95f8d-872zf" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:21.315836 containerd[1482]: time="2026-04-28T00:16:21.315260348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:21.315836 containerd[1482]: time="2026-04-28T00:16:21.315317234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:21.315836 containerd[1482]: time="2026-04-28T00:16:21.315328796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:21.316936 containerd[1482]: time="2026-04-28T00:16:21.315954427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:21.338657 systemd-networkd[1381]: calia269d3fd411: Link UP Apr 28 00:16:21.340923 systemd-networkd[1381]: calia269d3fd411: Gained carrier Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.082 [INFO][5044] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0 goldmane-6b4b7f4496- calico-system b64f11de-3264-4d46-8dd1-0c473c7b70ea 995 0 2026-04-28 00:15:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:6b4b7f4496 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-n-d098215774 goldmane-6b4b7f4496-57sjc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia269d3fd411 [] [] }} ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" 
WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.082 [INFO][5044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.167 [INFO][5064] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" HandleID="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.188 [INFO][5064] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" HandleID="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000356630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-d098215774", "pod":"goldmane-6b4b7f4496-57sjc", "timestamp":"2026-04-28 00:16:21.167733667 +0000 UTC"}, Hostname:"ci-4081-3-7-n-d098215774", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e8000)} Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.188 [INFO][5064] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.210 [INFO][5064] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.210 [INFO][5064] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-d098215774' Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.237 [INFO][5064] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.254 [INFO][5064] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.267 [INFO][5064] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.272 [INFO][5064] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.283 [INFO][5064] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.283 [INFO][5064] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.287 [INFO][5064] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929 Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.301 [INFO][5064] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" 
host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.320 [INFO][5064] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.200/26] block=192.168.26.192/26 handle="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.320 [INFO][5064] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.200/26] handle="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" host="ci-4081-3-7-n-d098215774" Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.320 [INFO][5064] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:21.374670 containerd[1482]: 2026-04-28 00:16:21.320 [INFO][5064] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.200/26] IPv6=[] ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" HandleID="k8s-pod-network.350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:21.375332 containerd[1482]: 2026-04-28 00:16:21.332 [INFO][5044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"b64f11de-3264-4d46-8dd1-0c473c7b70ea", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"", Pod:"goldmane-6b4b7f4496-57sjc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia269d3fd411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:21.375332 containerd[1482]: 2026-04-28 00:16:21.332 [INFO][5044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.200/32] ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:21.375332 containerd[1482]: 2026-04-28 00:16:21.332 [INFO][5044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia269d3fd411 ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:21.375332 containerd[1482]: 2026-04-28 00:16:21.339 [INFO][5044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" 
WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:21.375332 containerd[1482]: 2026-04-28 00:16:21.342 [INFO][5044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"b64f11de-3264-4d46-8dd1-0c473c7b70ea", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929", Pod:"goldmane-6b4b7f4496-57sjc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia269d3fd411", MAC:"82:ea:0d:85:3f:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 
00:16:21.375332 containerd[1482]: 2026-04-28 00:16:21.362 [INFO][5044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929" Namespace="calico-system" Pod="goldmane-6b4b7f4496-57sjc" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:21.377365 systemd[1]: Started cri-containerd-f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb.scope - libcontainer container f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb. Apr 28 00:16:21.426136 containerd[1482]: time="2026-04-28T00:16:21.426030863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:16:21.426969 containerd[1482]: time="2026-04-28T00:16:21.426934005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:16:21.428478 containerd[1482]: time="2026-04-28T00:16:21.428448777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:21.428880 containerd[1482]: time="2026-04-28T00:16:21.428776574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:16:21.445760 containerd[1482]: time="2026-04-28T00:16:21.445374015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:21.451761 containerd[1482]: time="2026-04-28T00:16:21.451683571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=42617669" Apr 28 00:16:21.452093 systemd-networkd[1381]: calif9a7c4f1a20: Gained IPv6LL Apr 28 00:16:21.471449 containerd[1482]: time="2026-04-28T00:16:21.470666962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b95f8d-872zf,Uid:8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9,Namespace:calico-system,Attempt:1,} returns sandbox id \"f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb\"" Apr 28 00:16:21.478094 containerd[1482]: time="2026-04-28T00:16:21.477301554Z" level=info msg="ImageCreate event name:\"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:21.480727 systemd[1]: Started cri-containerd-350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929.scope - libcontainer container 350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929. 
Apr 28 00:16:21.488487 containerd[1482]: time="2026-04-28T00:16:21.487787583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:21.488487 containerd[1482]: time="2026-04-28T00:16:21.488374809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 4.248105689s" Apr 28 00:16:21.488487 containerd[1482]: time="2026-04-28T00:16:21.488403253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 28 00:16:21.491248 containerd[1482]: time="2026-04-28T00:16:21.491219412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 00:16:21.495799 containerd[1482]: time="2026-04-28T00:16:21.495757206Z" level=info msg="CreateContainer within sandbox \"babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 00:16:21.519782 containerd[1482]: time="2026-04-28T00:16:21.519734604Z" level=info msg="CreateContainer within sandbox \"babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fbc237883e817b3f3352e594297093245be8bad78aa9d712e3eeb7a3ef17744f\"" Apr 28 00:16:21.525701 containerd[1482]: time="2026-04-28T00:16:21.523691892Z" level=info msg="StartContainer for \"fbc237883e817b3f3352e594297093245be8bad78aa9d712e3eeb7a3ef17744f\"" Apr 28 00:16:21.545517 containerd[1482]: 
time="2026-04-28T00:16:21.545008068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-57sjc,Uid:b64f11de-3264-4d46-8dd1-0c473c7b70ea,Namespace:calico-system,Attempt:1,} returns sandbox id \"350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929\"" Apr 28 00:16:21.570066 systemd[1]: Started cri-containerd-fbc237883e817b3f3352e594297093245be8bad78aa9d712e3eeb7a3ef17744f.scope - libcontainer container fbc237883e817b3f3352e594297093245be8bad78aa9d712e3eeb7a3ef17744f. Apr 28 00:16:21.615004 containerd[1482]: time="2026-04-28T00:16:21.614950196Z" level=info msg="StartContainer for \"fbc237883e817b3f3352e594297093245be8bad78aa9d712e3eeb7a3ef17744f\" returns successfully" Apr 28 00:16:21.991094 containerd[1482]: time="2026-04-28T00:16:21.991021741Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:21.995090 containerd[1482]: time="2026-04-28T00:16:21.995031316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 28 00:16:21.998303 containerd[1482]: time="2026-04-28T00:16:21.998240799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 506.822685ms" Apr 28 00:16:21.998385 containerd[1482]: time="2026-04-28T00:16:21.998304727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 28 00:16:22.001107 containerd[1482]: time="2026-04-28T00:16:22.001063079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 28 
00:16:22.012751 containerd[1482]: time="2026-04-28T00:16:22.012706809Z" level=info msg="CreateContainer within sandbox \"23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 00:16:22.035583 containerd[1482]: time="2026-04-28T00:16:22.035446003Z" level=info msg="CreateContainer within sandbox \"23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59ac1c7bae0d0e48e8e2240bf0601304e2d50494ca972cf5ff9a648e32f42f7d\"" Apr 28 00:16:22.039817 containerd[1482]: time="2026-04-28T00:16:22.038372607Z" level=info msg="StartContainer for \"59ac1c7bae0d0e48e8e2240bf0601304e2d50494ca972cf5ff9a648e32f42f7d\"" Apr 28 00:16:22.094067 systemd[1]: Started cri-containerd-59ac1c7bae0d0e48e8e2240bf0601304e2d50494ca972cf5ff9a648e32f42f7d.scope - libcontainer container 59ac1c7bae0d0e48e8e2240bf0601304e2d50494ca972cf5ff9a648e32f42f7d. 
Apr 28 00:16:22.142917 containerd[1482]: time="2026-04-28T00:16:22.142801113Z" level=info msg="StartContainer for \"59ac1c7bae0d0e48e8e2240bf0601304e2d50494ca972cf5ff9a648e32f42f7d\" returns successfully" Apr 28 00:16:22.206090 kubelet[2593]: I0428 00:16:22.206009 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57fccbd9fd-k8kd4" podStartSLOduration=29.466188799 podStartE2EDuration="33.205988899s" podCreationTimestamp="2026-04-28 00:15:49 +0000 UTC" firstStartedPulling="2026-04-28 00:16:18.259315399 +0000 UTC m=+46.644107164" lastFinishedPulling="2026-04-28 00:16:21.999115339 +0000 UTC m=+50.383907264" observedRunningTime="2026-04-28 00:16:22.183627387 +0000 UTC m=+50.568419192" watchObservedRunningTime="2026-04-28 00:16:22.205988899 +0000 UTC m=+50.590780704" Apr 28 00:16:22.206336 kubelet[2593]: I0428 00:16:22.206168 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57fccbd9fd-fbwcj" podStartSLOduration=28.956424081 podStartE2EDuration="33.206161838s" podCreationTimestamp="2026-04-28 00:15:49 +0000 UTC" firstStartedPulling="2026-04-28 00:16:17.239957442 +0000 UTC m=+45.624749247" lastFinishedPulling="2026-04-28 00:16:21.489695199 +0000 UTC m=+49.874487004" observedRunningTime="2026-04-28 00:16:22.202966325 +0000 UTC m=+50.587758130" watchObservedRunningTime="2026-04-28 00:16:22.206161838 +0000 UTC m=+50.590953643" Apr 28 00:16:22.476196 systemd-networkd[1381]: cali525e838ac30: Gained IPv6LL Apr 28 00:16:22.605351 systemd-networkd[1381]: calia269d3fd411: Gained IPv6LL Apr 28 00:16:23.171192 kubelet[2593]: I0428 00:16:23.171150 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:16:23.171784 kubelet[2593]: I0428 00:16:23.171612 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:16:24.412368 containerd[1482]: time="2026-04-28T00:16:24.412290917Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:24.413758 containerd[1482]: time="2026-04-28T00:16:24.413539289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=46169343" Apr 28 00:16:24.414975 containerd[1482]: time="2026-04-28T00:16:24.414838946Z" level=info msg="ImageCreate event name:\"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:24.418484 containerd[1482]: time="2026-04-28T00:16:24.418146055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:24.419095 containerd[1482]: time="2026-04-28T00:16:24.419055191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"48744950\" in 2.417935985s" Apr 28 00:16:24.419154 containerd[1482]: time="2026-04-28T00:16:24.419096315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\"" Apr 28 00:16:24.423656 containerd[1482]: time="2026-04-28T00:16:24.422361939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 28 00:16:24.446801 containerd[1482]: time="2026-04-28T00:16:24.446634461Z" level=info msg="CreateContainer within sandbox \"f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 28 00:16:24.461736 containerd[1482]: time="2026-04-28T00:16:24.461025019Z" level=info msg="CreateContainer within sandbox \"f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"309752876c2735c53656e37e257d5b940ff35e6cb21446ea05857803de2d38ed\"" Apr 28 00:16:24.466172 containerd[1482]: time="2026-04-28T00:16:24.466120037Z" level=info msg="StartContainer for \"309752876c2735c53656e37e257d5b940ff35e6cb21446ea05857803de2d38ed\"" Apr 28 00:16:24.536189 systemd[1]: Started cri-containerd-309752876c2735c53656e37e257d5b940ff35e6cb21446ea05857803de2d38ed.scope - libcontainer container 309752876c2735c53656e37e257d5b940ff35e6cb21446ea05857803de2d38ed. Apr 28 00:16:24.575432 containerd[1482]: time="2026-04-28T00:16:24.575267914Z" level=info msg="StartContainer for \"309752876c2735c53656e37e257d5b940ff35e6cb21446ea05857803de2d38ed\" returns successfully" Apr 28 00:16:25.203092 kubelet[2593]: I0428 00:16:25.202280 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d9b95f8d-872zf" podStartSLOduration=31.264231409 podStartE2EDuration="34.202261131s" podCreationTimestamp="2026-04-28 00:15:51 +0000 UTC" firstStartedPulling="2026-04-28 00:16:21.482702726 +0000 UTC m=+49.867494491" lastFinishedPulling="2026-04-28 00:16:24.420732368 +0000 UTC m=+52.805524213" observedRunningTime="2026-04-28 00:16:25.199549891 +0000 UTC m=+53.584341736" watchObservedRunningTime="2026-04-28 00:16:25.202261131 +0000 UTC m=+53.587052936" Apr 28 00:16:26.900827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453920980.mount: Deactivated successfully. 
Apr 28 00:16:27.165229 kubelet[2593]: I0428 00:16:27.163953 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:16:27.977228 containerd[1482]: time="2026-04-28T00:16:27.977179693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:27.979643 containerd[1482]: time="2026-04-28T00:16:27.979608293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=48513326" Apr 28 00:16:27.980825 containerd[1482]: time="2026-04-28T00:16:27.980737845Z" level=info msg="ImageCreate event name:\"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:27.984573 containerd[1482]: time="2026-04-28T00:16:27.984514139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:16:27.986079 containerd[1482]: time="2026-04-28T00:16:27.986033850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"48513172\" in 3.563624585s" Apr 28 00:16:27.986222 containerd[1482]: time="2026-04-28T00:16:27.986200506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\"" Apr 28 00:16:27.990444 containerd[1482]: time="2026-04-28T00:16:27.990404483Z" level=info msg="CreateContainer within sandbox \"350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 28 00:16:28.014278 containerd[1482]: time="2026-04-28T00:16:28.014199135Z" level=info msg="CreateContainer within sandbox \"350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"48b7bea159d22591e3b94dd3764feb171c24f4465628bd85db3e90789c28d2cc\"" Apr 28 00:16:28.017902 containerd[1482]: time="2026-04-28T00:16:28.016051435Z" level=info msg="StartContainer for \"48b7bea159d22591e3b94dd3764feb171c24f4465628bd85db3e90789c28d2cc\"" Apr 28 00:16:28.063090 systemd[1]: Started cri-containerd-48b7bea159d22591e3b94dd3764feb171c24f4465628bd85db3e90789c28d2cc.scope - libcontainer container 48b7bea159d22591e3b94dd3764feb171c24f4465628bd85db3e90789c28d2cc. Apr 28 00:16:28.102489 containerd[1482]: time="2026-04-28T00:16:28.102403547Z" level=info msg="StartContainer for \"48b7bea159d22591e3b94dd3764feb171c24f4465628bd85db3e90789c28d2cc\" returns successfully" Apr 28 00:16:31.724077 systemd[1]: Started sshd@10-128.140.91.51:22-223.233.87.33:19569.service - OpenSSH per-connection server daemon (223.233.87.33:19569). Apr 28 00:16:31.765031 containerd[1482]: time="2026-04-28T00:16:31.764979479Z" level=info msg="StopPodSandbox for \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\"" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.816 [WARNING][5454] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"b64f11de-3264-4d46-8dd1-0c473c7b70ea", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929", Pod:"goldmane-6b4b7f4496-57sjc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia269d3fd411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.817 [INFO][5454] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.817 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" iface="eth0" netns="" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.817 [INFO][5454] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.817 [INFO][5454] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.846 [INFO][5461] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.846 [INFO][5461] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.846 [INFO][5461] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.857 [WARNING][5461] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.857 [INFO][5461] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.859 [INFO][5461] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:31.863392 containerd[1482]: 2026-04-28 00:16:31.861 [INFO][5454] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.865390 containerd[1482]: time="2026-04-28T00:16:31.863422476Z" level=info msg="TearDown network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\" successfully" Apr 28 00:16:31.865390 containerd[1482]: time="2026-04-28T00:16:31.863450438Z" level=info msg="StopPodSandbox for \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\" returns successfully" Apr 28 00:16:31.865390 containerd[1482]: time="2026-04-28T00:16:31.865233803Z" level=info msg="RemovePodSandbox for \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\"" Apr 28 00:16:31.865390 containerd[1482]: time="2026-04-28T00:16:31.865282727Z" level=info msg="Forcibly stopping sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\"" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.910 [WARNING][5476] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"b64f11de-3264-4d46-8dd1-0c473c7b70ea", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"350baaf778e9b04a824ef651c20f9072164d93232465c28dcefb024f0c53e929", Pod:"goldmane-6b4b7f4496-57sjc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia269d3fd411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.910 [INFO][5476] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.910 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" iface="eth0" netns="" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.910 [INFO][5476] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.910 [INFO][5476] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.934 [INFO][5483] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.935 [INFO][5483] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.935 [INFO][5483] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.947 [WARNING][5483] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.947 [INFO][5483] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" HandleID="k8s-pod-network.8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Workload="ci--4081--3--7--n--d098215774-k8s-goldmane--6b4b7f4496--57sjc-eth0" Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.949 [INFO][5483] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:31.954949 containerd[1482]: 2026-04-28 00:16:31.953 [INFO][5476] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96" Apr 28 00:16:31.956705 containerd[1482]: time="2026-04-28T00:16:31.954968276Z" level=info msg="TearDown network for sandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\" successfully" Apr 28 00:16:31.961143 containerd[1482]: time="2026-04-28T00:16:31.961056198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:16:31.961244 containerd[1482]: time="2026-04-28T00:16:31.961191130Z" level=info msg="RemovePodSandbox \"8bdc08e06d6700c67a9bc1462148570f33108f64ba475b9069fa37b71d118f96\" returns successfully" Apr 28 00:16:31.962164 containerd[1482]: time="2026-04-28T00:16:31.961873393Z" level=info msg="StopPodSandbox for \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\"" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.018 [WARNING][5497] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2b3bdf19-11df-48e0-a735-59ffdd0aa820", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b", Pod:"coredns-66bc5c9577-x848s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1adbdc14391", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.019 [INFO][5497] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.019 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" iface="eth0" netns="" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.019 [INFO][5497] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.019 [INFO][5497] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.041 [INFO][5507] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.041 [INFO][5507] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.041 [INFO][5507] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.053 [WARNING][5507] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.053 [INFO][5507] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.056 [INFO][5507] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.061025 containerd[1482]: 2026-04-28 00:16:32.058 [INFO][5497] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.061025 containerd[1482]: time="2026-04-28T00:16:32.061001446Z" level=info msg="TearDown network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\" successfully" Apr 28 00:16:32.061794 containerd[1482]: time="2026-04-28T00:16:32.061030729Z" level=info msg="StopPodSandbox for \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\" returns successfully" Apr 28 00:16:32.061794 containerd[1482]: time="2026-04-28T00:16:32.061751674Z" level=info msg="RemovePodSandbox for \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\"" Apr 28 00:16:32.061794 containerd[1482]: time="2026-04-28T00:16:32.061783237Z" level=info msg="Forcibly stopping sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\"" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.113 [WARNING][5521] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2b3bdf19-11df-48e0-a735-59ffdd0aa820", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"cca29fc61d318d44c749ab76761e66ca6078bb99f4289a9a596a7d325a7e834b", Pod:"coredns-66bc5c9577-x848s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1adbdc14391", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.114 [INFO][5521] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.114 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" iface="eth0" netns="" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.114 [INFO][5521] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.114 [INFO][5521] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.148 [INFO][5528] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.149 [INFO][5528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.149 [INFO][5528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.160 [WARNING][5528] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.160 [INFO][5528] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" HandleID="k8s-pod-network.649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--x848s-eth0" Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.162 [INFO][5528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.166900 containerd[1482]: 2026-04-28 00:16:32.164 [INFO][5521] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd" Apr 28 00:16:32.166900 containerd[1482]: time="2026-04-28T00:16:32.166892535Z" level=info msg="TearDown network for sandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\" successfully" Apr 28 00:16:32.176149 containerd[1482]: time="2026-04-28T00:16:32.176082369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:16:32.176295 containerd[1482]: time="2026-04-28T00:16:32.176214381Z" level=info msg="RemovePodSandbox \"649b494fd8af24247251cb575151384ffbcce31c6b3f2416f2157c2ffb22b6bd\" returns successfully" Apr 28 00:16:32.176919 containerd[1482]: time="2026-04-28T00:16:32.176702545Z" level=info msg="StopPodSandbox for \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\"" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.224 [WARNING][5542] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"962851dd-9192-4cce-9ab2-039449d01d65", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd", Pod:"coredns-66bc5c9577-wphhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9a7c4f1a20", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.225 [INFO][5542] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.225 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" iface="eth0" netns="" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.225 [INFO][5542] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.225 [INFO][5542] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.250 [INFO][5549] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.250 [INFO][5549] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.250 [INFO][5549] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.263 [WARNING][5549] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.263 [INFO][5549] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.265 [INFO][5549] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.272101 containerd[1482]: 2026-04-28 00:16:32.268 [INFO][5542] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.274014 containerd[1482]: time="2026-04-28T00:16:32.272804546Z" level=info msg="TearDown network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\" successfully" Apr 28 00:16:32.274014 containerd[1482]: time="2026-04-28T00:16:32.272886753Z" level=info msg="StopPodSandbox for \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\" returns successfully" Apr 28 00:16:32.274014 containerd[1482]: time="2026-04-28T00:16:32.273552614Z" level=info msg="RemovePodSandbox for \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\"" Apr 28 00:16:32.274014 containerd[1482]: time="2026-04-28T00:16:32.273591817Z" level=info msg="Forcibly stopping sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\"" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.318 [WARNING][5563] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"962851dd-9192-4cce-9ab2-039449d01d65", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"3ad92a14fe0c7593cce3ba6d41799fdfaaba47da7257dcb1212402c1cbe192fd", Pod:"coredns-66bc5c9577-wphhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9a7c4f1a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.319 [INFO][5563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.319 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" iface="eth0" netns="" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.319 [INFO][5563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.319 [INFO][5563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.347 [INFO][5570] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.348 [INFO][5570] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.348 [INFO][5570] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.362 [WARNING][5570] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.362 [INFO][5570] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" HandleID="k8s-pod-network.8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Workload="ci--4081--3--7--n--d098215774-k8s-coredns--66bc5c9577--wphhk-eth0" Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.365 [INFO][5570] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.369645 containerd[1482]: 2026-04-28 00:16:32.367 [INFO][5563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee" Apr 28 00:16:32.370966 containerd[1482]: time="2026-04-28T00:16:32.370548655Z" level=info msg="TearDown network for sandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\" successfully" Apr 28 00:16:32.375679 containerd[1482]: time="2026-04-28T00:16:32.375605914Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:16:32.375679 containerd[1482]: time="2026-04-28T00:16:32.375687682Z" level=info msg="RemovePodSandbox \"8f28c02ff70cceb08f489986da8cc874418d7b7024106f5b94d42fa6ab0095ee\" returns successfully" Apr 28 00:16:32.377019 containerd[1482]: time="2026-04-28T00:16:32.376331100Z" level=info msg="StopPodSandbox for \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\"" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.423 [WARNING][5585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"9fe175ef-faaa-484e-a8c4-f822ee0538e6", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1", Pod:"calico-apiserver-57fccbd9fd-k8kd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali912044893d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.423 [INFO][5585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.423 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" iface="eth0" netns="" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.423 [INFO][5585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.423 [INFO][5585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.451 [INFO][5592] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.451 [INFO][5592] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.452 [INFO][5592] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.463 [WARNING][5592] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.463 [INFO][5592] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.468 [INFO][5592] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.473584 containerd[1482]: 2026-04-28 00:16:32.471 [INFO][5585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.473584 containerd[1482]: time="2026-04-28T00:16:32.473112922Z" level=info msg="TearDown network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\" successfully" Apr 28 00:16:32.473584 containerd[1482]: time="2026-04-28T00:16:32.473150686Z" level=info msg="StopPodSandbox for \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\" returns successfully" Apr 28 00:16:32.474079 containerd[1482]: time="2026-04-28T00:16:32.473868111Z" level=info msg="RemovePodSandbox for \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\"" Apr 28 00:16:32.474079 containerd[1482]: time="2026-04-28T00:16:32.473911355Z" level=info msg="Forcibly stopping sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\"" Apr 28 00:16:32.561591 sshd[5442]: Received disconnect from 223.233.87.33 port 19569:11: Bye Bye [preauth] Apr 28 00:16:32.561591 sshd[5442]: Disconnected from authenticating user root 223.233.87.33 port 
19569 [preauth] Apr 28 00:16:32.565729 systemd[1]: sshd@10-128.140.91.51:22-223.233.87.33:19569.service: Deactivated successfully. Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.522 [WARNING][5606] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"9fe175ef-faaa-484e-a8c4-f822ee0538e6", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"23cd7eeed5b8ae84ef9619fd7e9424ccd8de49ae072a29d794fb064095ca28d1", Pod:"calico-apiserver-57fccbd9fd-k8kd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali912044893d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 
00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.523 [INFO][5606] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.523 [INFO][5606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" iface="eth0" netns="" Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.523 [INFO][5606] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.523 [INFO][5606] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.546 [INFO][5614] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.546 [INFO][5614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.546 [INFO][5614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.556 [WARNING][5614] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.556 [INFO][5614] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" HandleID="k8s-pod-network.7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--k8kd4-eth0" Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.559 [INFO][5614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.567455 containerd[1482]: 2026-04-28 00:16:32.563 [INFO][5606] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c" Apr 28 00:16:32.568885 containerd[1482]: time="2026-04-28T00:16:32.567522449Z" level=info msg="TearDown network for sandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\" successfully" Apr 28 00:16:32.572672 containerd[1482]: time="2026-04-28T00:16:32.572622312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:16:32.572978 containerd[1482]: time="2026-04-28T00:16:32.572871935Z" level=info msg="RemovePodSandbox \"7929b3e5ba6ebb9d5cdef6dd90f46f4868deea921b7351c8c37afec6ee5d774c\" returns successfully" Apr 28 00:16:32.573556 containerd[1482]: time="2026-04-28T00:16:32.573515913Z" level=info msg="StopPodSandbox for \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\"" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.616 [WARNING][5630] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.616 [INFO][5630] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.616 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" iface="eth0" netns="" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.616 [INFO][5630] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.616 [INFO][5630] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.639 [INFO][5638] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.640 [INFO][5638] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.640 [INFO][5638] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.652 [WARNING][5638] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.652 [INFO][5638] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.654 [INFO][5638] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.659032 containerd[1482]: 2026-04-28 00:16:32.656 [INFO][5630] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.659032 containerd[1482]: time="2026-04-28T00:16:32.658868858Z" level=info msg="TearDown network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\" successfully" Apr 28 00:16:32.659032 containerd[1482]: time="2026-04-28T00:16:32.658898741Z" level=info msg="StopPodSandbox for \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\" returns successfully" Apr 28 00:16:32.661889 containerd[1482]: time="2026-04-28T00:16:32.659467593Z" level=info msg="RemovePodSandbox for \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\"" Apr 28 00:16:32.661889 containerd[1482]: time="2026-04-28T00:16:32.659500996Z" level=info msg="Forcibly stopping sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\"" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.712 [WARNING][5652] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" WorkloadEndpoint="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.712 [INFO][5652] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.712 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" iface="eth0" netns="" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.712 [INFO][5652] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.713 [INFO][5652] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.739 [INFO][5659] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.740 [INFO][5659] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.740 [INFO][5659] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.750 [WARNING][5659] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.750 [INFO][5659] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" HandleID="k8s-pod-network.a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Workload="ci--4081--3--7--n--d098215774-k8s-whisker--778646646c--g6l5c-eth0" Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.753 [INFO][5659] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.757627 containerd[1482]: 2026-04-28 00:16:32.755 [INFO][5652] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef" Apr 28 00:16:32.758284 containerd[1482]: time="2026-04-28T00:16:32.758205992Z" level=info msg="TearDown network for sandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\" successfully" Apr 28 00:16:32.762897 containerd[1482]: time="2026-04-28T00:16:32.762678358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:16:32.762897 containerd[1482]: time="2026-04-28T00:16:32.762787848Z" level=info msg="RemovePodSandbox \"a042dd0939982b6ee31a76a518e880e26a69424db72ec1c2bc1ace56766607ef\" returns successfully" Apr 28 00:16:32.763876 containerd[1482]: time="2026-04-28T00:16:32.763722773Z" level=info msg="StopPodSandbox for \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\"" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.803 [WARNING][5673] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0", GenerateName:"calico-kube-controllers-6d9b95f8d-", Namespace:"calico-system", SelfLink:"", UID:"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b95f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb", Pod:"calico-kube-controllers-6d9b95f8d-872zf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali525e838ac30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.803 [INFO][5673] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.803 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" iface="eth0" netns="" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.803 [INFO][5673] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.803 [INFO][5673] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.826 [INFO][5680] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.826 [INFO][5680] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.826 [INFO][5680] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.836 [WARNING][5680] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.837 [INFO][5680] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.839 [INFO][5680] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.842939 containerd[1482]: 2026-04-28 00:16:32.841 [INFO][5673] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.844471 containerd[1482]: time="2026-04-28T00:16:32.843815241Z" level=info msg="TearDown network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\" successfully" Apr 28 00:16:32.844471 containerd[1482]: time="2026-04-28T00:16:32.843962534Z" level=info msg="StopPodSandbox for \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\" returns successfully" Apr 28 00:16:32.844780 containerd[1482]: time="2026-04-28T00:16:32.844571229Z" level=info msg="RemovePodSandbox for \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\"" Apr 28 00:16:32.844780 containerd[1482]: time="2026-04-28T00:16:32.844603792Z" level=info msg="Forcibly stopping sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\"" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.887 [WARNING][5695] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0", GenerateName:"calico-kube-controllers-6d9b95f8d-", Namespace:"calico-system", SelfLink:"", UID:"8d7f6caa-ecef-41c4-8a1f-7174e0b0ebd9", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b95f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"f1649f4b8b6fd923f69e5f87da7381f48f9a736a5902881ecf8df978b20992fb", Pod:"calico-kube-controllers-6d9b95f8d-872zf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali525e838ac30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.888 [INFO][5695] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.888 [INFO][5695] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" iface="eth0" netns="" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.888 [INFO][5695] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.888 [INFO][5695] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.911 [INFO][5703] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.911 [INFO][5703] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.911 [INFO][5703] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.924 [WARNING][5703] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.924 [INFO][5703] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" HandleID="k8s-pod-network.9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Workload="ci--4081--3--7--n--d098215774-k8s-calico--kube--controllers--6d9b95f8d--872zf-eth0" Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.928 [INFO][5703] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:32.932576 containerd[1482]: 2026-04-28 00:16:32.930 [INFO][5695] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854" Apr 28 00:16:32.932576 containerd[1482]: time="2026-04-28T00:16:32.932363716Z" level=info msg="TearDown network for sandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\" successfully" Apr 28 00:16:32.938938 containerd[1482]: time="2026-04-28T00:16:32.938257931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:16:32.938938 containerd[1482]: time="2026-04-28T00:16:32.938345179Z" level=info msg="RemovePodSandbox \"9546c2ceafe3a8b9aeaadb67fdc27ce00bcc2642e6b61f4e779998ce94381854\" returns successfully" Apr 28 00:16:32.938938 containerd[1482]: time="2026-04-28T00:16:32.938926671Z" level=info msg="StopPodSandbox for \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\"" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:32.983 [WARNING][5718] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1", Pod:"calico-apiserver-57fccbd9fd-fbwcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91deac84bee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:32.985 [INFO][5718] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:32.985 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" iface="eth0" netns="" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:32.985 [INFO][5718] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:32.985 [INFO][5718] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:33.009 [INFO][5725] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:33.009 [INFO][5725] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:33.009 [INFO][5725] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:33.021 [WARNING][5725] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:33.021 [INFO][5725] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:33.023 [INFO][5725] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:33.027708 containerd[1482]: 2026-04-28 00:16:33.025 [INFO][5718] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.028249 containerd[1482]: time="2026-04-28T00:16:33.027758495Z" level=info msg="TearDown network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\" successfully" Apr 28 00:16:33.028249 containerd[1482]: time="2026-04-28T00:16:33.027790618Z" level=info msg="StopPodSandbox for \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\" returns successfully" Apr 28 00:16:33.028438 containerd[1482]: time="2026-04-28T00:16:33.028318025Z" level=info msg="RemovePodSandbox for \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\"" Apr 28 00:16:33.028494 containerd[1482]: time="2026-04-28T00:16:33.028450797Z" level=info msg="Forcibly stopping sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\"" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.076 [WARNING][5739] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0", GenerateName:"calico-apiserver-57fccbd9fd-", Namespace:"calico-system", SelfLink:"", UID:"3bfb0bf9-7964-4238-9b69-c54d44a2b3dc", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57fccbd9fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-d098215774", ContainerID:"babcdf0d02b6834ce5083a6cc98a18ac9cf857b1487099b4df90a8e71b2925d1", Pod:"calico-apiserver-57fccbd9fd-fbwcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali91deac84bee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.076 [INFO][5739] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.076 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" iface="eth0" netns="" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.076 [INFO][5739] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.076 [INFO][5739] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.104 [INFO][5747] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.105 [INFO][5747] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.105 [INFO][5747] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.119 [WARNING][5747] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.119 [INFO][5747] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" HandleID="k8s-pod-network.b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Workload="ci--4081--3--7--n--d098215774-k8s-calico--apiserver--57fccbd9fd--fbwcj-eth0" Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.121 [INFO][5747] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:16:33.127973 containerd[1482]: 2026-04-28 00:16:33.125 [INFO][5739] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227" Apr 28 00:16:33.128427 containerd[1482]: time="2026-04-28T00:16:33.128021817Z" level=info msg="TearDown network for sandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\" successfully" Apr 28 00:16:33.133721 containerd[1482]: time="2026-04-28T00:16:33.133529509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:16:33.133721 containerd[1482]: time="2026-04-28T00:16:33.133617957Z" level=info msg="RemovePodSandbox \"b5abc571dd7aeb55735bae7799760d6c11864f72860fb1d8d6c8290fda600227\" returns successfully" Apr 28 00:16:38.773899 kubelet[2593]: I0428 00:16:38.773700 2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:16:38.800380 kubelet[2593]: I0428 00:16:38.799486 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-6b4b7f4496-57sjc" podStartSLOduration=43.35929841 podStartE2EDuration="49.799469239s" podCreationTimestamp="2026-04-28 00:15:49 +0000 UTC" firstStartedPulling="2026-04-28 00:16:21.546947728 +0000 UTC m=+49.931739533" lastFinishedPulling="2026-04-28 00:16:27.987118557 +0000 UTC m=+56.371910362" observedRunningTime="2026-04-28 00:16:28.234988952 +0000 UTC m=+56.619780757" watchObservedRunningTime="2026-04-28 00:16:38.799469239 +0000 UTC m=+67.184261044" Apr 28 00:16:38.898325 systemd[1]: sshd@0-128.140.91.51:22-203.34.56.186:39188.service: Deactivated successfully. Apr 28 00:16:59.728165 systemd[1]: Started sshd@11-128.140.91.51:22-45.142.193.135:33492.service - OpenSSH per-connection server daemon (45.142.193.135:33492). Apr 28 00:16:59.814319 sshd[5862]: Invalid user admin from 45.142.193.135 port 33492 Apr 28 00:16:59.827673 sshd[5862]: Connection closed by invalid user admin 45.142.193.135 port 33492 [preauth] Apr 28 00:16:59.831095 systemd[1]: sshd@11-128.140.91.51:22-45.142.193.135:33492.service: Deactivated successfully. Apr 28 00:17:09.045057 systemd[1]: run-containerd-runc-k8s.io-6c38f8ea4254cd7d0880125e32a68dbeed8e58062aa02f754fdccdd8f2a761c4-runc.5J99vc.mount: Deactivated successfully. Apr 28 00:17:35.256170 systemd[1]: Started sshd@12-128.140.91.51:22-223.233.87.33:12558.service - OpenSSH per-connection server daemon (223.233.87.33:12558). 
Apr 28 00:17:36.092770 sshd[5952]: Received disconnect from 223.233.87.33 port 12558:11: Bye Bye [preauth] Apr 28 00:17:36.092770 sshd[5952]: Disconnected from authenticating user root 223.233.87.33 port 12558 [preauth] Apr 28 00:17:36.096111 systemd[1]: sshd@12-128.140.91.51:22-223.233.87.33:12558.service: Deactivated successfully. Apr 28 00:17:56.153322 systemd[1]: Started sshd@13-128.140.91.51:22-50.85.169.122:39992.service - OpenSSH per-connection server daemon (50.85.169.122:39992). Apr 28 00:17:56.275511 sshd[6053]: Accepted publickey for core from 50.85.169.122 port 39992 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:17:56.278631 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:17:56.283394 systemd-logind[1460]: New session 8 of user core. Apr 28 00:17:56.294187 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 28 00:17:56.495513 sshd[6053]: pam_unix(sshd:session): session closed for user core Apr 28 00:17:56.502373 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Apr 28 00:17:56.503825 systemd[1]: sshd@13-128.140.91.51:22-50.85.169.122:39992.service: Deactivated successfully. Apr 28 00:17:56.507533 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 00:17:56.512354 systemd-logind[1460]: Removed session 8. Apr 28 00:18:01.533563 systemd[1]: Started sshd@14-128.140.91.51:22-50.85.169.122:50734.service - OpenSSH per-connection server daemon (50.85.169.122:50734). Apr 28 00:18:01.656807 sshd[6088]: Accepted publickey for core from 50.85.169.122 port 50734 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:01.659581 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:01.665470 systemd-logind[1460]: New session 9 of user core. Apr 28 00:18:01.674269 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 28 00:18:01.872707 sshd[6088]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:01.879688 systemd[1]: sshd@14-128.140.91.51:22-50.85.169.122:50734.service: Deactivated successfully. Apr 28 00:18:01.886474 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 00:18:01.887844 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Apr 28 00:18:01.889059 systemd-logind[1460]: Removed session 9. Apr 28 00:18:06.906257 systemd[1]: Started sshd@15-128.140.91.51:22-50.85.169.122:50740.service - OpenSSH per-connection server daemon (50.85.169.122:50740). Apr 28 00:18:07.034434 sshd[6102]: Accepted publickey for core from 50.85.169.122 port 50740 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:07.037410 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:07.042474 systemd-logind[1460]: New session 10 of user core. Apr 28 00:18:07.048183 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 28 00:18:07.220277 sshd[6102]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:07.226360 systemd[1]: sshd@15-128.140.91.51:22-50.85.169.122:50740.service: Deactivated successfully. Apr 28 00:18:07.230548 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 00:18:07.231985 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Apr 28 00:18:07.234538 systemd-logind[1460]: Removed session 10. Apr 28 00:18:12.255342 systemd[1]: Started sshd@16-128.140.91.51:22-50.85.169.122:52148.service - OpenSSH per-connection server daemon (50.85.169.122:52148). Apr 28 00:18:12.386389 sshd[6144]: Accepted publickey for core from 50.85.169.122 port 52148 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:12.388056 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:12.393876 systemd-logind[1460]: New session 11 of user core. 
Apr 28 00:18:12.398179 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 28 00:18:12.578798 sshd[6144]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:12.583871 systemd[1]: sshd@16-128.140.91.51:22-50.85.169.122:52148.service: Deactivated successfully. Apr 28 00:18:12.586387 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 00:18:12.589114 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Apr 28 00:18:12.590515 systemd-logind[1460]: Removed session 11. Apr 28 00:18:12.617378 systemd[1]: Started sshd@17-128.140.91.51:22-50.85.169.122:52150.service - OpenSSH per-connection server daemon (50.85.169.122:52150). Apr 28 00:18:12.741676 sshd[6158]: Accepted publickey for core from 50.85.169.122 port 52150 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:12.744017 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:12.750706 systemd-logind[1460]: New session 12 of user core. Apr 28 00:18:12.756080 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 28 00:18:12.969525 sshd[6158]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:12.975693 systemd[1]: sshd@17-128.140.91.51:22-50.85.169.122:52150.service: Deactivated successfully. Apr 28 00:18:12.980132 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 00:18:12.983208 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Apr 28 00:18:13.004276 systemd[1]: Started sshd@18-128.140.91.51:22-50.85.169.122:52162.service - OpenSSH per-connection server daemon (50.85.169.122:52162). Apr 28 00:18:13.005269 systemd-logind[1460]: Removed session 12. 
Apr 28 00:18:13.119234 sshd[6169]: Accepted publickey for core from 50.85.169.122 port 52162 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:13.121678 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:13.132127 systemd-logind[1460]: New session 13 of user core. Apr 28 00:18:13.136110 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 00:18:13.306525 sshd[6169]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:13.313231 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Apr 28 00:18:13.313648 systemd[1]: sshd@18-128.140.91.51:22-50.85.169.122:52162.service: Deactivated successfully. Apr 28 00:18:13.316752 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 00:18:13.317956 systemd-logind[1460]: Removed session 13. Apr 28 00:18:18.345284 systemd[1]: Started sshd@19-128.140.91.51:22-50.85.169.122:52168.service - OpenSSH per-connection server daemon (50.85.169.122:52168). Apr 28 00:18:18.474709 sshd[6181]: Accepted publickey for core from 50.85.169.122 port 52168 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:18.476256 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:18.482944 systemd-logind[1460]: New session 14 of user core. Apr 28 00:18:18.489196 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 00:18:18.666835 sshd[6181]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:18.672981 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Apr 28 00:18:18.673434 systemd[1]: sshd@19-128.140.91.51:22-50.85.169.122:52168.service: Deactivated successfully. Apr 28 00:18:18.675780 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 00:18:18.677432 systemd-logind[1460]: Removed session 14. 
Apr 28 00:18:18.695814 systemd[1]: Started sshd@20-128.140.91.51:22-50.85.169.122:52170.service - OpenSSH per-connection server daemon (50.85.169.122:52170). Apr 28 00:18:18.812727 sshd[6194]: Accepted publickey for core from 50.85.169.122 port 52170 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:18.814531 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:18.820112 systemd-logind[1460]: New session 15 of user core. Apr 28 00:18:18.830190 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 28 00:18:19.188756 sshd[6194]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:19.195118 systemd[1]: sshd@20-128.140.91.51:22-50.85.169.122:52170.service: Deactivated successfully. Apr 28 00:18:19.199109 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 00:18:19.200756 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Apr 28 00:18:19.201771 systemd-logind[1460]: Removed session 15. Apr 28 00:18:19.224207 systemd[1]: Started sshd@21-128.140.91.51:22-50.85.169.122:52186.service - OpenSSH per-connection server daemon (50.85.169.122:52186). Apr 28 00:18:19.357318 sshd[6205]: Accepted publickey for core from 50.85.169.122 port 52186 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:19.359711 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:19.365123 systemd-logind[1460]: New session 16 of user core. Apr 28 00:18:19.370065 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 00:18:20.024483 sshd[6205]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:20.032431 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Apr 28 00:18:20.032665 systemd[1]: sshd@21-128.140.91.51:22-50.85.169.122:52186.service: Deactivated successfully. Apr 28 00:18:20.037059 systemd[1]: session-16.scope: Deactivated successfully. 
Apr 28 00:18:20.053717 systemd-logind[1460]: Removed session 16. Apr 28 00:18:20.059462 systemd[1]: Started sshd@22-128.140.91.51:22-50.85.169.122:36502.service - OpenSSH per-connection server daemon (50.85.169.122:36502). Apr 28 00:18:20.195699 sshd[6221]: Accepted publickey for core from 50.85.169.122 port 36502 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:20.197069 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:20.201791 systemd-logind[1460]: New session 17 of user core. Apr 28 00:18:20.207071 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 00:18:20.520703 sshd[6221]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:20.525580 systemd[1]: sshd@22-128.140.91.51:22-50.85.169.122:36502.service: Deactivated successfully. Apr 28 00:18:20.531239 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 00:18:20.536457 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Apr 28 00:18:20.552383 systemd[1]: Started sshd@23-128.140.91.51:22-50.85.169.122:36518.service - OpenSSH per-connection server daemon (50.85.169.122:36518). Apr 28 00:18:20.556080 systemd-logind[1460]: Removed session 17. Apr 28 00:18:20.675926 sshd[6231]: Accepted publickey for core from 50.85.169.122 port 36518 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:20.678721 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:20.686244 systemd-logind[1460]: New session 18 of user core. Apr 28 00:18:20.696141 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 28 00:18:20.876409 sshd[6231]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:20.882687 systemd[1]: sshd@23-128.140.91.51:22-50.85.169.122:36518.service: Deactivated successfully. Apr 28 00:18:20.886476 systemd[1]: session-18.scope: Deactivated successfully. 
Apr 28 00:18:20.887460 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Apr 28 00:18:20.889143 systemd-logind[1460]: Removed session 18. Apr 28 00:18:25.922189 systemd[1]: Started sshd@24-128.140.91.51:22-50.85.169.122:36532.service - OpenSSH per-connection server daemon (50.85.169.122:36532). Apr 28 00:18:26.046408 sshd[6287]: Accepted publickey for core from 50.85.169.122 port 36532 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:26.050269 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:26.057703 systemd-logind[1460]: New session 19 of user core. Apr 28 00:18:26.068174 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 00:18:26.235122 sshd[6287]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:26.241202 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. Apr 28 00:18:26.241685 systemd[1]: sshd@24-128.140.91.51:22-50.85.169.122:36532.service: Deactivated successfully. Apr 28 00:18:26.246788 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 00:18:26.248568 systemd-logind[1460]: Removed session 19. Apr 28 00:18:31.272226 systemd[1]: Started sshd@25-128.140.91.51:22-50.85.169.122:39812.service - OpenSSH per-connection server daemon (50.85.169.122:39812). Apr 28 00:18:31.394529 sshd[6321]: Accepted publickey for core from 50.85.169.122 port 39812 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:18:31.397267 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:18:31.403480 systemd-logind[1460]: New session 20 of user core. Apr 28 00:18:31.409080 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 00:18:31.582918 sshd[6321]: pam_unix(sshd:session): session closed for user core Apr 28 00:18:31.589207 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. 
Apr 28 00:18:31.589516 systemd[1]: sshd@25-128.140.91.51:22-50.85.169.122:39812.service: Deactivated successfully. Apr 28 00:18:31.593098 systemd[1]: session-20.scope: Deactivated successfully. Apr 28 00:18:31.594829 systemd-logind[1460]: Removed session 20. Apr 28 00:18:33.528313 systemd[1]: Started sshd@26-128.140.91.51:22-223.233.87.33:29886.service - OpenSSH per-connection server daemon (223.233.87.33:29886). Apr 28 00:18:34.442548 sshd[6336]: Received disconnect from 223.233.87.33 port 29886:11: Bye Bye [preauth] Apr 28 00:18:34.442548 sshd[6336]: Disconnected from authenticating user root 223.233.87.33 port 29886 [preauth] Apr 28 00:18:34.444734 systemd[1]: sshd@26-128.140.91.51:22-223.233.87.33:29886.service: Deactivated successfully. Apr 28 00:18:59.237989 systemd[1]: run-containerd-runc-k8s.io-48b7bea159d22591e3b94dd3764feb171c24f4465628bd85db3e90789c28d2cc-runc.853Ll8.mount: Deactivated successfully. Apr 28 00:19:03.219376 systemd[1]: cri-containerd-ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000.scope: Deactivated successfully. Apr 28 00:19:03.219657 systemd[1]: cri-containerd-ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000.scope: Consumed 14.252s CPU time. Apr 28 00:19:03.247065 containerd[1482]: time="2026-04-28T00:19:03.246938926Z" level=info msg="shim disconnected" id=ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000 namespace=k8s.io Apr 28 00:19:03.247065 containerd[1482]: time="2026-04-28T00:19:03.247013765Z" level=warning msg="cleaning up after shim disconnected" id=ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000 namespace=k8s.io Apr 28 00:19:03.247065 containerd[1482]: time="2026-04-28T00:19:03.247023005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:19:03.249043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000-rootfs.mount: Deactivated successfully. 
Apr 28 00:19:03.380060 kubelet[2593]: E0428 00:19:03.379954 2593 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43166->10.0.0.2:2379: read: connection timed out" Apr 28 00:19:03.639622 systemd[1]: cri-containerd-0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6.scope: Deactivated successfully. Apr 28 00:19:03.640824 systemd[1]: cri-containerd-0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6.scope: Consumed 4.135s CPU time, 17.8M memory peak, 0B memory swap peak. Apr 28 00:19:03.666673 containerd[1482]: time="2026-04-28T00:19:03.666591749Z" level=info msg="shim disconnected" id=0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6 namespace=k8s.io Apr 28 00:19:03.666673 containerd[1482]: time="2026-04-28T00:19:03.666661028Z" level=warning msg="cleaning up after shim disconnected" id=0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6 namespace=k8s.io Apr 28 00:19:03.666673 containerd[1482]: time="2026-04-28T00:19:03.666673548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:19:03.668518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6-rootfs.mount: Deactivated successfully. 
Apr 28 00:19:03.739050 kubelet[2593]: I0428 00:19:03.738978 2593 scope.go:117] "RemoveContainer" containerID="ec95b1c093e825622dbd5f0ad56084ebf35759b95c8a990ba63d5a29e5f4b000" Apr 28 00:19:03.739982 kubelet[2593]: I0428 00:19:03.739477 2593 scope.go:117] "RemoveContainer" containerID="0a6d2fde72551ee50ff0be8aa8272b6eaf2a8e4f3d7107d51d7cb9d4c81a8df6" Apr 28 00:19:03.743133 containerd[1482]: time="2026-04-28T00:19:03.743088840Z" level=info msg="CreateContainer within sandbox \"cc64ae7e95c28b766c2030b9c594b29f9b3a61afda6d7882bd8cf35fc00ec7c9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 28 00:19:03.743473 containerd[1482]: time="2026-04-28T00:19:03.743327676Z" level=info msg="CreateContainer within sandbox \"2e838160af6857ea93aad67ff7af815ee9eb2d0ffd073b15e449b1cb6b38d358\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 28 00:19:03.767292 containerd[1482]: time="2026-04-28T00:19:03.767239017Z" level=info msg="CreateContainer within sandbox \"cc64ae7e95c28b766c2030b9c594b29f9b3a61afda6d7882bd8cf35fc00ec7c9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fad891bb08f904aa33029726b279dce1cb06546d9937f48264884f3c94e42e50\"" Apr 28 00:19:03.769283 containerd[1482]: time="2026-04-28T00:19:03.768774747Z" level=info msg="StartContainer for \"fad891bb08f904aa33029726b279dce1cb06546d9937f48264884f3c94e42e50\"" Apr 28 00:19:03.773371 containerd[1482]: time="2026-04-28T00:19:03.773316180Z" level=info msg="CreateContainer within sandbox \"2e838160af6857ea93aad67ff7af815ee9eb2d0ffd073b15e449b1cb6b38d358\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"338684b6658e4c54ee51498877379405dc8bdc4c64c866336ca0befc4075408f\"" Apr 28 00:19:03.777087 containerd[1482]: time="2026-04-28T00:19:03.777040189Z" level=info msg="StartContainer for \"338684b6658e4c54ee51498877379405dc8bdc4c64c866336ca0befc4075408f\"" Apr 28 00:19:03.806274 systemd[1]: Started 
cri-containerd-fad891bb08f904aa33029726b279dce1cb06546d9937f48264884f3c94e42e50.scope - libcontainer container fad891bb08f904aa33029726b279dce1cb06546d9937f48264884f3c94e42e50. Apr 28 00:19:03.819061 systemd[1]: Started cri-containerd-338684b6658e4c54ee51498877379405dc8bdc4c64c866336ca0befc4075408f.scope - libcontainer container 338684b6658e4c54ee51498877379405dc8bdc4c64c866336ca0befc4075408f. Apr 28 00:19:03.849053 containerd[1482]: time="2026-04-28T00:19:03.848983887Z" level=info msg="StartContainer for \"fad891bb08f904aa33029726b279dce1cb06546d9937f48264884f3c94e42e50\" returns successfully" Apr 28 00:19:03.868591 containerd[1482]: time="2026-04-28T00:19:03.868542992Z" level=info msg="StartContainer for \"338684b6658e4c54ee51498877379405dc8bdc4c64c866336ca0befc4075408f\" returns successfully"