Apr 28 00:12:09.899774 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 28 00:12:09.899810 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 27 22:49:05 -00 2026
Apr 28 00:12:09.899869 kernel: KASLR enabled
Apr 28 00:12:09.899876 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 28 00:12:09.899882 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 28 00:12:09.899888 kernel: random: crng init done
Apr 28 00:12:09.899895 kernel: ACPI: Early table checksum verification disabled
Apr 28 00:12:09.899901 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 28 00:12:09.899908 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 28 00:12:09.899916 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899923 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899929 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899935 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899941 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899949 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899957 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899963 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899970 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 28 00:12:09.899977 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 28 00:12:09.899983 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 28 00:12:09.899990 kernel: NUMA: Failed to initialise from firmware
Apr 28 00:12:09.899997 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 28 00:12:09.900003 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Apr 28 00:12:09.900009 kernel: Zone ranges:
Apr 28 00:12:09.900016 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 28 00:12:09.900024 kernel: DMA32 empty
Apr 28 00:12:09.900030 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 28 00:12:09.900037 kernel: Movable zone start for each node
Apr 28 00:12:09.900043 kernel: Early memory node ranges
Apr 28 00:12:09.900050 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 28 00:12:09.900058 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 28 00:12:09.900064 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 28 00:12:09.900071 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 28 00:12:09.900078 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 28 00:12:09.900084 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 28 00:12:09.900091 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 28 00:12:09.900098 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 28 00:12:09.900107 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 28 00:12:09.900113 kernel: psci: probing for conduit method from ACPI.
Apr 28 00:12:09.900120 kernel: psci: PSCIv1.1 detected in firmware.
Apr 28 00:12:09.900130 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 28 00:12:09.900136 kernel: psci: Trusted OS migration not required
Apr 28 00:12:09.900143 kernel: psci: SMC Calling Convention v1.1
Apr 28 00:12:09.900152 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 28 00:12:09.900159 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 28 00:12:09.900166 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 28 00:12:09.900173 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 28 00:12:09.900179 kernel: Detected PIPT I-cache on CPU0
Apr 28 00:12:09.900186 kernel: CPU features: detected: GIC system register CPU interface
Apr 28 00:12:09.900193 kernel: CPU features: detected: Hardware dirty bit management
Apr 28 00:12:09.900200 kernel: CPU features: detected: Spectre-v4
Apr 28 00:12:09.900207 kernel: CPU features: detected: Spectre-BHB
Apr 28 00:12:09.900213 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 28 00:12:09.900222 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 28 00:12:09.900229 kernel: CPU features: detected: ARM erratum 1418040
Apr 28 00:12:09.900236 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 28 00:12:09.900243 kernel: alternatives: applying boot alternatives
Apr 28 00:12:09.900251 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fbd74e24c605bcd6049a4229047ecffba5884416be782935a76f3959939199f
Apr 28 00:12:09.900259 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 28 00:12:09.900266 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 28 00:12:09.900273 kernel: Fallback order for Node 0: 0
Apr 28 00:12:09.900280 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 28 00:12:09.900287 kernel: Policy zone: Normal
Apr 28 00:12:09.900294 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 28 00:12:09.900303 kernel: software IO TLB: area num 2.
Apr 28 00:12:09.900310 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 28 00:12:09.900317 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Apr 28 00:12:09.900324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 28 00:12:09.900331 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 28 00:12:09.900338 kernel: rcu: RCU event tracing is enabled.
Apr 28 00:12:09.900345 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 28 00:12:09.900353 kernel: Trampoline variant of Tasks RCU enabled.
Apr 28 00:12:09.900360 kernel: Tracing variant of Tasks RCU enabled.
Apr 28 00:12:09.900367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 28 00:12:09.900374 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 28 00:12:09.900381 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 28 00:12:09.900390 kernel: GICv3: 256 SPIs implemented
Apr 28 00:12:09.900397 kernel: GICv3: 0 Extended SPIs implemented
Apr 28 00:12:09.900404 kernel: Root IRQ handler: gic_handle_irq
Apr 28 00:12:09.900411 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 28 00:12:09.900418 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 28 00:12:09.900425 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 28 00:12:09.900432 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 28 00:12:09.900439 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 28 00:12:09.900446 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 28 00:12:09.900454 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 28 00:12:09.900460 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 28 00:12:09.900469 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 28 00:12:09.900476 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 28 00:12:09.900483 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 28 00:12:09.900490 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 28 00:12:09.900497 kernel: Console: colour dummy device 80x25
Apr 28 00:12:09.900504 kernel: ACPI: Core revision 20230628
Apr 28 00:12:09.900511 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 28 00:12:09.900518 kernel: pid_max: default: 32768 minimum: 301
Apr 28 00:12:09.900526 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 28 00:12:09.900533 kernel: landlock: Up and running.
Apr 28 00:12:09.900541 kernel: SELinux: Initializing.
Apr 28 00:12:09.900548 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:12:09.900555 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 28 00:12:09.900563 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 28 00:12:09.900570 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 28 00:12:09.900577 kernel: rcu: Hierarchical SRCU implementation.
Apr 28 00:12:09.900584 kernel: rcu: Max phase no-delay instances is 400.
Apr 28 00:12:09.900592 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 28 00:12:09.900599 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 28 00:12:09.900608 kernel: Remapping and enabling EFI services.
Apr 28 00:12:09.900616 kernel: smp: Bringing up secondary CPUs ...
Apr 28 00:12:09.900623 kernel: Detected PIPT I-cache on CPU1
Apr 28 00:12:09.900631 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 28 00:12:09.900638 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 28 00:12:09.900645 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 28 00:12:09.900653 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 28 00:12:09.900674 kernel: smp: Brought up 1 node, 2 CPUs
Apr 28 00:12:09.900682 kernel: SMP: Total of 2 processors activated.
Apr 28 00:12:09.900690 kernel: CPU features: detected: 32-bit EL0 Support
Apr 28 00:12:09.900700 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 28 00:12:09.900708 kernel: CPU features: detected: Common not Private translations
Apr 28 00:12:09.900721 kernel: CPU features: detected: CRC32 instructions
Apr 28 00:12:09.900731 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 28 00:12:09.900738 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 28 00:12:09.900748 kernel: CPU features: detected: LSE atomic instructions
Apr 28 00:12:09.900757 kernel: CPU features: detected: Privileged Access Never
Apr 28 00:12:09.900766 kernel: CPU features: detected: RAS Extension Support
Apr 28 00:12:09.900776 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 28 00:12:09.900784 kernel: CPU: All CPU(s) started at EL1
Apr 28 00:12:09.900791 kernel: alternatives: applying system-wide alternatives
Apr 28 00:12:09.900799 kernel: devtmpfs: initialized
Apr 28 00:12:09.900807 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 28 00:12:09.900814 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 28 00:12:09.900831 kernel: pinctrl core: initialized pinctrl subsystem
Apr 28 00:12:09.900838 kernel: SMBIOS 3.0.0 present.
Apr 28 00:12:09.900849 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 28 00:12:09.900856 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 28 00:12:09.900864 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 28 00:12:09.900871 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 28 00:12:09.900879 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 28 00:12:09.900887 kernel: audit: initializing netlink subsys (disabled)
Apr 28 00:12:09.900895 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Apr 28 00:12:09.900902 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 28 00:12:09.900910 kernel: cpuidle: using governor menu
Apr 28 00:12:09.900919 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 28 00:12:09.900927 kernel: ASID allocator initialised with 32768 entries
Apr 28 00:12:09.900934 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 28 00:12:09.900941 kernel: Serial: AMBA PL011 UART driver
Apr 28 00:12:09.900949 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 28 00:12:09.900956 kernel: Modules: 0 pages in range for non-PLT usage
Apr 28 00:12:09.900963 kernel: Modules: 509008 pages in range for PLT usage
Apr 28 00:12:09.900971 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 28 00:12:09.900978 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 28 00:12:09.900988 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 28 00:12:09.900995 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 28 00:12:09.901003 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 28 00:12:09.901010 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 28 00:12:09.901018 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 28 00:12:09.901025 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 28 00:12:09.901033 kernel: ACPI: Added _OSI(Module Device)
Apr 28 00:12:09.901040 kernel: ACPI: Added _OSI(Processor Device)
Apr 28 00:12:09.901047 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 28 00:12:09.901055 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 28 00:12:09.901064 kernel: ACPI: Interpreter enabled
Apr 28 00:12:09.901072 kernel: ACPI: Using GIC for interrupt routing
Apr 28 00:12:09.901079 kernel: ACPI: MCFG table detected, 1 entries
Apr 28 00:12:09.901087 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 28 00:12:09.901094 kernel: printk: console [ttyAMA0] enabled
Apr 28 00:12:09.901102 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 28 00:12:09.901293 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 28 00:12:09.901376 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 28 00:12:09.901443 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 28 00:12:09.901524 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 28 00:12:09.901591 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 28 00:12:09.901601 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 28 00:12:09.901608 kernel: PCI host bridge to bus 0000:00
Apr 28 00:12:09.902807 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 28 00:12:09.902978 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 28 00:12:09.903057 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 28 00:12:09.903118 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 28 00:12:09.903215 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 28 00:12:09.903294 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 28 00:12:09.903363 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 28 00:12:09.903432 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 28 00:12:09.903513 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.903582 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 28 00:12:09.903670 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.903755 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 28 00:12:09.903849 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.903924 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 28 00:12:09.904007 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.904075 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 28 00:12:09.904150 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.904219 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 28 00:12:09.904294 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.904361 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 28 00:12:09.904439 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.904507 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 28 00:12:09.904596 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.907904 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 28 00:12:09.908061 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 28 00:12:09.908132 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 28 00:12:09.908208 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 28 00:12:09.908288 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 28 00:12:09.908377 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 28 00:12:09.908450 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 28 00:12:09.908519 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 28 00:12:09.908589 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 28 00:12:09.908687 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 28 00:12:09.909815 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 28 00:12:09.909935 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 28 00:12:09.910010 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 28 00:12:09.910080 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 28 00:12:09.910159 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 28 00:12:09.910228 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 28 00:12:09.913269 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 28 00:12:09.913410 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 28 00:12:09.913531 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 28 00:12:09.913694 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 28 00:12:09.913781 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 28 00:12:09.913924 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 28 00:12:09.914020 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 28 00:12:09.914094 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 28 00:12:09.914162 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 28 00:12:09.914232 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 28 00:12:09.914307 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 28 00:12:09.914373 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 28 00:12:09.914439 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 28 00:12:09.914514 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 28 00:12:09.914583 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 28 00:12:09.914649 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 28 00:12:09.916912 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 28 00:12:09.917008 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 28 00:12:09.917079 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 28 00:12:09.917152 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 28 00:12:09.917221 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 28 00:12:09.917298 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 28 00:12:09.917371 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 28 00:12:09.917440 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 28 00:12:09.917520 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 28 00:12:09.917595 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 28 00:12:09.917679 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 28 00:12:09.917751 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 28 00:12:09.917845 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 28 00:12:09.917921 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 28 00:12:09.917988 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 28 00:12:09.918060 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 28 00:12:09.918126 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 28 00:12:09.918194 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 28 00:12:09.918266 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 28 00:12:09.918332 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 28 00:12:09.918403 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 28 00:12:09.918473 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 28 00:12:09.918541 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 28 00:12:09.918612 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 28 00:12:09.921251 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 28 00:12:09.921355 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 28 00:12:09.921426 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 28 00:12:09.921524 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 28 00:12:09.921598 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 28 00:12:09.921698 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 28 00:12:09.921771 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 28 00:12:09.921862 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 28 00:12:09.921936 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 28 00:12:09.922032 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 28 00:12:09.922111 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 28 00:12:09.922193 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 28 00:12:09.922260 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 28 00:12:09.922433 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 28 00:12:09.922517 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 28 00:12:09.922593 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 28 00:12:09.922684 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 28 00:12:09.922763 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 28 00:12:09.922848 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 28 00:12:09.922922 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 28 00:12:09.922989 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 28 00:12:09.923055 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 28 00:12:09.923120 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 28 00:12:09.923188 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 28 00:12:09.923263 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 28 00:12:09.923332 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 28 00:12:09.923418 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 28 00:12:09.923504 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 28 00:12:09.923575 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 28 00:12:09.923646 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 28 00:12:09.923890 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 28 00:12:09.923970 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 28 00:12:09.924044 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 28 00:12:09.924111 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 28 00:12:09.924177 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 28 00:12:09.924248 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 28 00:12:09.924323 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 28 00:12:09.924391 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 28 00:12:09.924458 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 28 00:12:09.924526 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 28 00:12:09.924595 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 28 00:12:09.924766 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 28 00:12:09.924874 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 28 00:12:09.924955 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 28 00:12:09.925031 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 28 00:12:09.925096 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 28 00:12:09.925160 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 28 00:12:09.925226 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 28 00:12:09.925299 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 28 00:12:09.925366 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 28 00:12:09.925432 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 28 00:12:09.925504 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 28 00:12:09.925572 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 28 00:12:09.925639 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 28 00:12:09.925736 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 28 00:12:09.925808 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 28 00:12:09.925892 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 28 00:12:09.925958 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 28 00:12:09.926022 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 28 00:12:09.926098 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 28 00:12:09.926174 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 28 00:12:09.926242 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 28 00:12:09.926311 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 28 00:12:09.927852 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 28 00:12:09.927943 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 28 00:12:09.928019 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 28 00:12:09.928088 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 28 00:12:09.928158 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 28 00:12:09.928231 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 28 00:12:09.928299 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 28 00:12:09.928366 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 28 00:12:09.928444 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 28 00:12:09.928524 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 28 00:12:09.928598 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 28 00:12:09.928682 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 28 00:12:09.928753 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 28 00:12:09.928836 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 28 00:12:09.928907 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 28 00:12:09.928978 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 28 00:12:09.930778 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 28 00:12:09.930926 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 28 00:12:09.930997 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 28 00:12:09.931072 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 28 00:12:09.931139 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 28 00:12:09.931214 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 28 00:12:09.931280 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 28 00:12:09.931351 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 28 00:12:09.931411 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 28 00:12:09.931471 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 28 00:12:09.931547 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 28 00:12:09.931610 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 28 00:12:09.932646 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 28 00:12:09.932777 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 28 00:12:09.932900 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 28 00:12:09.932965 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 28 00:12:09.933036 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 28 00:12:09.933097 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 28 00:12:09.933157 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 28 00:12:09.933238 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 28 00:12:09.933300 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 28 00:12:09.933378 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 28 00:12:09.933456 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 28 00:12:09.933525 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 28 00:12:09.933587 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 28 00:12:09.933685 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 28 00:12:09.933753 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 28 00:12:09.933831 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 28 00:12:09.933916 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 28 00:12:09.933987 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 28 00:12:09.934049 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 28 00:12:09.934119 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 28 00:12:09.934181 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 28 00:12:09.934242 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 28 00:12:09.934312 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 28 00:12:09.934374 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 28 00:12:09.934448 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 28 00:12:09.934459 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 28 00:12:09.934467 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 28 00:12:09.934476 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 28 00:12:09.934484 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 28 00:12:09.934492 kernel: iommu: Default domain type: Translated
Apr 28 00:12:09.934500 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 28 00:12:09.934508 kernel: efivars: Registered efivars operations
Apr 28 00:12:09.934516 kernel: vgaarb: loaded
Apr 28 00:12:09.934526 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 28 00:12:09.934534 kernel: VFS: Disk quotas dquot_6.6.0
Apr 28 00:12:09.934542 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 28 00:12:09.934550 kernel: pnp: PnP ACPI init
Apr 28 00:12:09.934634 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 28 00:12:09.934646 kernel: pnp: PnP ACPI: found 1 devices
Apr 28 00:12:09.934654 kernel: NET: Registered PF_INET protocol family
Apr 28 00:12:09.937596 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 28 00:12:09.937617 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 28 00:12:09.937627 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 28 00:12:09.937637 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 28 00:12:09.937646
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 28 00:12:09.937654 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 28 00:12:09.937675 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:12:09.937697 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:12:09.937706 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 28 00:12:09.937906 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 28 00:12:09.937928 kernel: PCI: CLS 0 bytes, default 64 Apr 28 00:12:09.937936 kernel: kvm [1]: HYP mode not available Apr 28 00:12:09.937944 kernel: Initialise system trusted keyrings Apr 28 00:12:09.937952 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 28 00:12:09.937960 kernel: Key type asymmetric registered Apr 28 00:12:09.937968 kernel: Asymmetric key parser 'x509' registered Apr 28 00:12:09.937976 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 28 00:12:09.937984 kernel: io scheduler mq-deadline registered Apr 28 00:12:09.937993 kernel: io scheduler kyber registered Apr 28 00:12:09.938003 kernel: io scheduler bfq registered Apr 28 00:12:09.938012 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 28 00:12:09.938096 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 28 00:12:09.938166 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 28 00:12:09.938242 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.938320 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 28 00:12:09.938397 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 28 00:12:09.938469 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.938546 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 28 00:12:09.938631 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 28 00:12:09.938891 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.938972 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 28 00:12:09.939041 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 28 00:12:09.939113 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.939182 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 28 00:12:09.939249 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 28 00:12:09.939314 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.939384 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 28 00:12:09.939451 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 28 00:12:09.939520 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.939592 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 28 00:12:09.940387 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 28 00:12:09.940526 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.940603 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 28 00:12:09.940713 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 28 00:12:09.940797 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.940809 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 28 00:12:09.940941 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 28 00:12:09.941014 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 28 00:12:09.941082 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 28 00:12:09.941092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 28 00:12:09.941105 kernel: ACPI: button: Power Button [PWRB] Apr 28 00:12:09.941113 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 28 00:12:09.941191 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 28 00:12:09.941272 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 28 00:12:09.941283 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 00:12:09.941291 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 28 00:12:09.941361 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 28 00:12:09.941372 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 28 00:12:09.941380 kernel: thunder_xcv, ver 1.0 Apr 28 00:12:09.941390 kernel: thunder_bgx, ver 1.0 Apr 28 00:12:09.941398 kernel: nicpf, ver 1.0 Apr 28 00:12:09.941406 kernel: nicvf, ver 1.0 Apr 28 00:12:09.941500 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 28 00:12:09.941567 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-28T00:12:09 UTC (1777335129) Apr 28 00:12:09.941578 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 28 00:12:09.941587 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 28 00:12:09.941595 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 28 00:12:09.941606 kernel: watchdog: Hard watchdog permanently disabled Apr 28 00:12:09.941614 kernel: NET: Registered PF_INET6 protocol family Apr 28 00:12:09.941624 kernel: Segment Routing with IPv6 Apr 28 00:12:09.941632 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 28 00:12:09.941640 kernel: NET: Registered PF_PACKET protocol family Apr 28 00:12:09.941648 kernel: Key type dns_resolver registered Apr 28 00:12:09.941656 kernel: registered taskstats version 1 Apr 28 00:12:09.941678 kernel: Loading compiled-in X.509 certificates Apr 28 00:12:09.941687 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6c96a5ff031ece119b3ff0073294cdad6eea39a2' Apr 28 00:12:09.941697 kernel: Key type .fscrypt registered Apr 28 00:12:09.941705 kernel: Key type fscrypt-provisioning registered Apr 28 00:12:09.941713 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 28 00:12:09.941721 kernel: ima: Allocated hash algorithm: sha1 Apr 28 00:12:09.941729 kernel: ima: No architecture policies found Apr 28 00:12:09.941737 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 28 00:12:09.941745 kernel: clk: Disabling unused clocks Apr 28 00:12:09.941753 kernel: Freeing unused kernel memory: 39424K Apr 28 00:12:09.941761 kernel: Run /init as init process Apr 28 00:12:09.941769 kernel: with arguments: Apr 28 00:12:09.941779 kernel: /init Apr 28 00:12:09.941786 kernel: with environment: Apr 28 00:12:09.941794 kernel: HOME=/ Apr 28 00:12:09.941802 kernel: TERM=linux Apr 28 00:12:09.941812 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 28 00:12:09.941833 systemd[1]: Detected virtualization kvm. Apr 28 00:12:09.941841 systemd[1]: Detected architecture arm64. Apr 28 00:12:09.941852 systemd[1]: Running in initrd. Apr 28 00:12:09.941860 systemd[1]: No hostname configured, using default hostname. Apr 28 00:12:09.941869 systemd[1]: Hostname set to . 
Apr 28 00:12:09.941877 systemd[1]: Initializing machine ID from VM UUID. Apr 28 00:12:09.941886 systemd[1]: Queued start job for default target initrd.target. Apr 28 00:12:09.941894 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 28 00:12:09.941903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 28 00:12:09.941912 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 28 00:12:09.941923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 28 00:12:09.941933 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 28 00:12:09.941942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 28 00:12:09.941952 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 28 00:12:09.941960 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 28 00:12:09.941969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 28 00:12:09.941978 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:12:09.941989 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:12:09.941998 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:12:09.942006 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:12:09.942015 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:12:09.942023 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 28 00:12:09.942032 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 28 00:12:09.942041 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 28 00:12:09.942049 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 28 00:12:09.942058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:12:09.942068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:12:09.942077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:12:09.942085 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:12:09.942094 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 28 00:12:09.942102 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:12:09.942111 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 28 00:12:09.942120 systemd[1]: Starting systemd-fsck-usr.service... Apr 28 00:12:09.942128 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:12:09.942138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 28 00:12:09.942147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:12:09.942155 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 28 00:12:09.942166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:12:09.942174 systemd[1]: Finished systemd-fsck-usr.service. Apr 28 00:12:09.942211 systemd-journald[237]: Collecting audit messages is disabled. Apr 28 00:12:09.942236 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:12:09.942245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:12:09.942253 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 28 00:12:09.942263 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:12:09.942272 kernel: Bridge firewalling registered Apr 28 00:12:09.942280 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:12:09.942288 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:12:09.942297 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:12:09.942307 systemd-journald[237]: Journal started Apr 28 00:12:09.942328 systemd-journald[237]: Runtime Journal (/run/log/journal/01edfbbc3cfd4a92b295fae22270f9b4) is 8.0M, max 76.6M, 68.6M free. Apr 28 00:12:09.900723 systemd-modules-load[238]: Inserted module 'overlay' Apr 28 00:12:09.927485 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 28 00:12:09.951692 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:12:09.958015 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:12:09.961633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:12:09.969895 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 28 00:12:09.971904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:12:09.975868 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:12:09.993760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:12:09.998389 dracut-cmdline[267]: dracut-dracut-053 Apr 28 00:12:10.000601 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 28 00:12:10.003572 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fbd74e24c605bcd6049a4229047ecffba5884416be782935a76f3959939199f Apr 28 00:12:10.013023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:12:10.038576 systemd-resolved[285]: Positive Trust Anchors: Apr 28 00:12:10.038594 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:12:10.038626 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:12:10.049566 systemd-resolved[285]: Defaulting to hostname 'linux'. Apr 28 00:12:10.050756 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:12:10.051506 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:12:10.083750 kernel: SCSI subsystem initialized Apr 28 00:12:10.088717 kernel: Loading iSCSI transport class v2.0-870. 
Apr 28 00:12:10.096729 kernel: iscsi: registered transport (tcp) Apr 28 00:12:10.110753 kernel: iscsi: registered transport (qla4xxx) Apr 28 00:12:10.110882 kernel: QLogic iSCSI HBA Driver Apr 28 00:12:10.162364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 28 00:12:10.170866 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 28 00:12:10.192936 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 28 00:12:10.193629 kernel: device-mapper: uevent: version 1.0.3 Apr 28 00:12:10.193644 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 28 00:12:10.245723 kernel: raid6: neonx8 gen() 15561 MB/s Apr 28 00:12:10.262741 kernel: raid6: neonx4 gen() 15572 MB/s Apr 28 00:12:10.279725 kernel: raid6: neonx2 gen() 13148 MB/s Apr 28 00:12:10.296734 kernel: raid6: neonx1 gen() 10407 MB/s Apr 28 00:12:10.313724 kernel: raid6: int64x8 gen() 6916 MB/s Apr 28 00:12:10.330745 kernel: raid6: int64x4 gen() 7311 MB/s Apr 28 00:12:10.347725 kernel: raid6: int64x2 gen() 6101 MB/s Apr 28 00:12:10.364754 kernel: raid6: int64x1 gen() 5040 MB/s Apr 28 00:12:10.364882 kernel: raid6: using algorithm neonx4 gen() 15572 MB/s Apr 28 00:12:10.381762 kernel: raid6: .... xor() 11997 MB/s, rmw enabled Apr 28 00:12:10.381911 kernel: raid6: using neon recovery algorithm Apr 28 00:12:10.386731 kernel: xor: measuring software checksum speed Apr 28 00:12:10.386844 kernel: 8regs : 19769 MB/sec Apr 28 00:12:10.386892 kernel: 32regs : 16929 MB/sec Apr 28 00:12:10.387878 kernel: arm64_neon : 26691 MB/sec Apr 28 00:12:10.387910 kernel: xor: using function: arm64_neon (26691 MB/sec) Apr 28 00:12:10.438771 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 28 00:12:10.453132 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Apr 28 00:12:10.459988 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:12:10.488318 systemd-udevd[457]: Using default interface naming scheme 'v255'. Apr 28 00:12:10.492899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:12:10.502772 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 28 00:12:10.521182 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Apr 28 00:12:10.554711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 28 00:12:10.560980 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:12:10.616473 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:12:10.625993 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 28 00:12:10.649110 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 28 00:12:10.652202 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 28 00:12:10.653968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:12:10.655929 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:12:10.661944 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 28 00:12:10.681232 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 28 00:12:10.713508 kernel: scsi host0: Virtio SCSI HBA Apr 28 00:12:10.714678 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 28 00:12:10.715678 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 28 00:12:10.750137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 28 00:12:10.750267 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 28 00:12:10.751231 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:12:10.754031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:12:10.754194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:12:10.755268 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:12:10.761897 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 28 00:12:10.763015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:12:10.766899 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 28 00:12:10.767107 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 28 00:12:10.770686 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 28 00:12:10.785049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:12:10.789721 kernel: ACPI: bus type USB registered Apr 28 00:12:10.789767 kernel: usbcore: registered new interface driver usbfs Apr 28 00:12:10.794870 kernel: usbcore: registered new interface driver hub Apr 28 00:12:10.794928 kernel: usbcore: registered new device driver usb Apr 28 00:12:10.795931 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 28 00:12:10.799722 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 28 00:12:10.801704 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 28 00:12:10.804886 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 28 00:12:10.805162 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 28 00:12:10.805264 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 28 00:12:10.808554 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Apr 28 00:12:10.808626 kernel: GPT:17805311 != 80003071 Apr 28 00:12:10.808638 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 28 00:12:10.808648 kernel: GPT:17805311 != 80003071 Apr 28 00:12:10.808672 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 28 00:12:10.808686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 28 00:12:10.811692 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 28 00:12:10.834032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 28 00:12:10.838769 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 28 00:12:10.839087 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 28 00:12:10.842924 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 28 00:12:10.846021 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 28 00:12:10.846237 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 28 00:12:10.846323 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 28 00:12:10.856697 kernel: hub 1-0:1.0: USB hub found Apr 28 00:12:10.856953 kernel: hub 1-0:1.0: 4 ports detected Apr 28 00:12:10.861685 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 28 00:12:10.865695 kernel: hub 2-0:1.0: USB hub found Apr 28 00:12:10.865938 kernel: hub 2-0:1.0: 4 ports detected Apr 28 00:12:10.866028 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (518) Apr 28 00:12:10.872754 kernel: BTRFS: device fsid 4ceb9780-605b-47f7-8c1f-b3fcb9f87ddc devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (527) Apr 28 00:12:10.885260 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 28 00:12:10.892114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 28 00:12:10.901738 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 28 00:12:10.902459 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 28 00:12:10.913494 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 28 00:12:10.919969 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 28 00:12:10.933768 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 28 00:12:10.934822 disk-uuid[576]: Primary Header is updated. Apr 28 00:12:10.934822 disk-uuid[576]: Secondary Entries is updated. Apr 28 00:12:10.934822 disk-uuid[576]: Secondary Header is updated. Apr 28 00:12:11.099973 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 28 00:12:11.234487 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 28 00:12:11.234567 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 28 00:12:11.234914 kernel: usbcore: registered new interface driver usbhid Apr 28 00:12:11.234939 kernel: usbhid: USB HID core driver Apr 28 00:12:11.342860 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 28 00:12:11.471715 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 28 00:12:11.524693 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 28 00:12:11.953814 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 28 00:12:11.955116 disk-uuid[577]: The operation has completed successfully. Apr 28 00:12:12.011371 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 28 00:12:12.011514 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 28 00:12:12.024048 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 28 00:12:12.030788 sh[594]: Success Apr 28 00:12:12.045691 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 28 00:12:12.095338 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 28 00:12:12.111754 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 28 00:12:12.116911 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 28 00:12:12.140048 kernel: BTRFS info (device dm-0): first mount of filesystem 4ceb9780-605b-47f7-8c1f-b3fcb9f87ddc Apr 28 00:12:12.140120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 28 00:12:12.140147 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 28 00:12:12.140162 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 28 00:12:12.140176 kernel: BTRFS info (device dm-0): using free space tree Apr 28 00:12:12.147713 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 28 00:12:12.149531 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 28 00:12:12.152272 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 28 00:12:12.160996 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 28 00:12:12.164605 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 28 00:12:12.177611 kernel: BTRFS info (device sda6): first mount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220 Apr 28 00:12:12.177691 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 28 00:12:12.177704 kernel: BTRFS info (device sda6): using free space tree Apr 28 00:12:12.183733 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 28 00:12:12.183843 kernel: BTRFS info (device sda6): auto enabling async discard Apr 28 00:12:12.197829 kernel: BTRFS info (device sda6): last unmount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220 Apr 28 00:12:12.197536 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 28 00:12:12.204961 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 28 00:12:12.211238 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 28 00:12:12.300890 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 28 00:12:12.308946 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:12:12.320508 ignition[690]: Ignition 2.19.0 Apr 28 00:12:12.320518 ignition[690]: Stage: fetch-offline Apr 28 00:12:12.320567 ignition[690]: no configs at "/usr/lib/ignition/base.d" Apr 28 00:12:12.320576 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 28 00:12:12.322982 ignition[690]: parsed url from cmdline: "" Apr 28 00:12:12.324913 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 28 00:12:12.322987 ignition[690]: no config URL provided Apr 28 00:12:12.322996 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Apr 28 00:12:12.323013 ignition[690]: no config at "/usr/lib/ignition/user.ign" Apr 28 00:12:12.323020 ignition[690]: failed to fetch config: resource requires networking Apr 28 00:12:12.323251 ignition[690]: Ignition finished successfully Apr 28 00:12:12.332762 systemd-networkd[784]: lo: Link UP Apr 28 00:12:12.332776 systemd-networkd[784]: lo: Gained carrier Apr 28 00:12:12.334554 systemd-networkd[784]: Enumeration completed Apr 28 00:12:12.334722 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:12:12.335397 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:12:12.335401 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:12:12.335501 systemd[1]: Reached target network.target - Network. Apr 28 00:12:12.336566 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:12:12.336569 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:12:12.337778 systemd-networkd[784]: eth0: Link UP Apr 28 00:12:12.337782 systemd-networkd[784]: eth0: Gained carrier Apr 28 00:12:12.337790 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:12:12.344373 systemd-networkd[784]: eth1: Link UP Apr 28 00:12:12.344377 systemd-networkd[784]: eth1: Gained carrier Apr 28 00:12:12.344388 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 28 00:12:12.347892 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 28 00:12:12.361722 ignition[787]: Ignition 2.19.0
Apr 28 00:12:12.361734 ignition[787]: Stage: fetch
Apr 28 00:12:12.361935 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 28 00:12:12.361946 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 28 00:12:12.362056 ignition[787]: parsed url from cmdline: ""
Apr 28 00:12:12.362059 ignition[787]: no config URL provided
Apr 28 00:12:12.362064 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
Apr 28 00:12:12.362071 ignition[787]: no config at "/usr/lib/ignition/user.ign"
Apr 28 00:12:12.362100 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 28 00:12:12.362892 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 28 00:12:12.385772 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 28 00:12:12.395835 systemd-networkd[784]: eth0: DHCPv4 address 178.105.25.61/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 28 00:12:12.563108 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 28 00:12:12.568866 ignition[787]: GET result: OK
Apr 28 00:12:12.569046 ignition[787]: parsing config with SHA512: 69dad49c61415e6486343afdb6c60699b29460ae497abc877964519393c77fb3bb9ccb74ea640d60e810cf1e79a836eb29929cba3384c82289adb9032b2f65f9
Apr 28 00:12:12.575311 unknown[787]: fetched base config from "system"
Apr 28 00:12:12.575323 unknown[787]: fetched base config from "system"
Apr 28 00:12:12.576097 ignition[787]: fetch: fetch complete
Apr 28 00:12:12.575328 unknown[787]: fetched user config from "hetzner"
Apr 28 00:12:12.576104 ignition[787]: fetch: fetch passed
Apr 28 00:12:12.577751 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 28 00:12:12.576172 ignition[787]: Ignition finished successfully
Apr 28 00:12:12.592033 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 28 00:12:12.607932 ignition[794]: Ignition 2.19.0
Apr 28 00:12:12.607945 ignition[794]: Stage: kargs
Apr 28 00:12:12.608133 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 28 00:12:12.608142 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 28 00:12:12.609107 ignition[794]: kargs: kargs passed
Apr 28 00:12:12.609170 ignition[794]: Ignition finished successfully
Apr 28 00:12:12.613175 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 28 00:12:12.622006 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 28 00:12:12.633623 ignition[801]: Ignition 2.19.0
Apr 28 00:12:12.633633 ignition[801]: Stage: disks
Apr 28 00:12:12.633883 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Apr 28 00:12:12.633894 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 28 00:12:12.634993 ignition[801]: disks: disks passed
Apr 28 00:12:12.635053 ignition[801]: Ignition finished successfully
Apr 28 00:12:12.639722 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 28 00:12:12.640777 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 28 00:12:12.641639 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 00:12:12.643302 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 00:12:12.644354 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 00:12:12.645288 systemd[1]: Reached target basic.target - Basic System.
Apr 28 00:12:12.652967 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 28 00:12:12.668899 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 28 00:12:12.674317 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 28 00:12:12.683940 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 28 00:12:12.735697 kernel: EXT4-fs (sda9): mounted filesystem 2d8f83b6-5f3b-4fc5-b0f6-3405e8e67f7b r/w with ordered data mode. Quota mode: none.
Apr 28 00:12:12.736132 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 28 00:12:12.738023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 28 00:12:12.751895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 00:12:12.756924 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 28 00:12:12.768863 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (817)
Apr 28 00:12:12.769133 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 28 00:12:12.769877 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 28 00:12:12.769914 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 00:12:12.777414 kernel: BTRFS info (device sda6): first mount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220
Apr 28 00:12:12.777442 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 28 00:12:12.777454 kernel: BTRFS info (device sda6): using free space tree
Apr 28 00:12:12.776278 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 28 00:12:12.786007 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 28 00:12:12.793551 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 28 00:12:12.793629 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 28 00:12:12.802199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 00:12:12.831007 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Apr 28 00:12:12.837348 coreos-metadata[819]: Apr 28 00:12:12.837 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 28 00:12:12.838944 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Apr 28 00:12:12.840840 coreos-metadata[819]: Apr 28 00:12:12.839 INFO Fetch successful
Apr 28 00:12:12.840840 coreos-metadata[819]: Apr 28 00:12:12.840 INFO wrote hostname ci-4081-3-7-n-651e172f95 to /sysroot/etc/hostname
Apr 28 00:12:12.846001 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 28 00:12:12.850282 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Apr 28 00:12:12.856756 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 28 00:12:12.963771 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 28 00:12:12.971902 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 28 00:12:12.975899 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 28 00:12:12.985692 kernel: BTRFS info (device sda6): last unmount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220
Apr 28 00:12:13.006186 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 28 00:12:13.011461 ignition[935]: INFO : Ignition 2.19.0
Apr 28 00:12:13.011461 ignition[935]: INFO : Stage: mount
Apr 28 00:12:13.012743 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:12:13.012743 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 28 00:12:13.016368 ignition[935]: INFO : mount: mount passed
Apr 28 00:12:13.016368 ignition[935]: INFO : Ignition finished successfully
Apr 28 00:12:13.015728 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 28 00:12:13.024182 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 28 00:12:13.139639 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 00:12:13.153109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 00:12:13.162697 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (947)
Apr 28 00:12:13.164811 kernel: BTRFS info (device sda6): first mount of filesystem 57367c84-0f72-4cbc-90cb-9cf0a8258220
Apr 28 00:12:13.164859 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 28 00:12:13.164882 kernel: BTRFS info (device sda6): using free space tree
Apr 28 00:12:13.168692 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 28 00:12:13.168779 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 28 00:12:13.172751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 00:12:13.195686 ignition[964]: INFO : Ignition 2.19.0
Apr 28 00:12:13.195686 ignition[964]: INFO : Stage: files
Apr 28 00:12:13.195686 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:12:13.195686 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 28 00:12:13.198387 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Apr 28 00:12:13.199964 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 28 00:12:13.199964 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 00:12:13.204468 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 00:12:13.206160 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 28 00:12:13.207639 unknown[964]: wrote ssh authorized keys file for user: core
Apr 28 00:12:13.208935 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 00:12:13.211691 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 28 00:12:13.213486 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 28 00:12:13.289286 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 28 00:12:13.360362 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 28 00:12:13.360362 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 28 00:12:13.363028 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Apr 28 00:12:13.557917 systemd-networkd[784]: eth1: Gained IPv6LL
Apr 28 00:12:13.734531 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 28 00:12:14.069956 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 28 00:12:14.425669 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 28 00:12:14.425669 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 00:12:14.429723 ignition[964]: INFO : files: files passed
Apr 28 00:12:14.429723 ignition[964]: INFO : Ignition finished successfully
Apr 28 00:12:14.429288 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 00:12:14.437495 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 00:12:14.439046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 00:12:14.450316 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 00:12:14.450467 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 00:12:14.460077 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:12:14.460077 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:12:14.462991 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:12:14.466885 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 00:12:14.469346 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 00:12:14.474970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 00:12:14.526110 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 00:12:14.526303 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 00:12:14.528352 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 00:12:14.529045 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 00:12:14.529616 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 00:12:14.542060 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 00:12:14.560063 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 00:12:14.567952 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 00:12:14.581096 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:12:14.582633 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:12:14.583651 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 00:12:14.584717 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 00:12:14.584924 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 00:12:14.586464 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 00:12:14.587643 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 00:12:14.588563 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 00:12:14.589512 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 00:12:14.590555 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 00:12:14.591670 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 00:12:14.592731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 00:12:14.593983 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 00:12:14.595109 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 00:12:14.596032 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 00:12:14.596853 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 00:12:14.597039 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 00:12:14.598212 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:12:14.599392 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:12:14.600344 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 00:12:14.601441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:12:14.603022 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 00:12:14.603207 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 00:12:14.604807 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 00:12:14.604981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 00:12:14.606443 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 00:12:14.606630 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 00:12:14.607762 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 28 00:12:14.607967 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 28 00:12:14.617288 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 00:12:14.620983 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 00:12:14.621487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 00:12:14.621690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:12:14.624972 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 00:12:14.625163 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 00:12:14.632424 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 00:12:14.633728 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 00:12:14.641386 ignition[1017]: INFO : Ignition 2.19.0
Apr 28 00:12:14.643872 ignition[1017]: INFO : Stage: umount
Apr 28 00:12:14.643872 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:12:14.643872 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 28 00:12:14.648240 ignition[1017]: INFO : umount: umount passed
Apr 28 00:12:14.648240 ignition[1017]: INFO : Ignition finished successfully
Apr 28 00:12:14.647069 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 00:12:14.649001 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 00:12:14.649100 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 00:12:14.650541 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 00:12:14.650629 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 00:12:14.652771 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 00:12:14.652841 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 00:12:14.653415 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 28 00:12:14.653458 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 28 00:12:14.654461 systemd[1]: Stopped target network.target - Network.
Apr 28 00:12:14.655549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 00:12:14.655609 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 00:12:14.656752 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 00:12:14.657754 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 00:12:14.662906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:12:14.665456 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 00:12:14.666824 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 00:12:14.667960 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 00:12:14.668008 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 00:12:14.668989 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 00:12:14.669026 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 00:12:14.669953 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 00:12:14.670009 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 00:12:14.670936 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 00:12:14.670985 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 00:12:14.672047 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 00:12:14.673168 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 00:12:14.674459 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 00:12:14.674561 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 00:12:14.676006 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 00:12:14.676097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 00:12:14.676756 systemd-networkd[784]: eth1: DHCPv6 lease lost
Apr 28 00:12:14.678529 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 00:12:14.678642 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 00:12:14.679804 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 28 00:12:14.683006 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 00:12:14.683324 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 00:12:14.685564 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 00:12:14.685633 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:12:14.693235 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 00:12:14.693891 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 00:12:14.693973 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 00:12:14.694806 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 00:12:14.694857 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:12:14.697738 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 00:12:14.697811 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:12:14.699213 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 00:12:14.699285 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:12:14.700567 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:12:14.715254 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 00:12:14.715447 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 00:12:14.717194 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 00:12:14.717333 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:12:14.719085 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 00:12:14.719144 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:12:14.720329 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 00:12:14.720367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:12:14.721372 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 00:12:14.721422 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 00:12:14.722993 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 00:12:14.723043 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 00:12:14.724555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 00:12:14.724604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:12:14.737030 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 00:12:14.739331 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 00:12:14.739483 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:12:14.740747 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 28 00:12:14.740815 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:12:14.742149 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 00:12:14.742197 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:12:14.743507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:12:14.743604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:12:14.750196 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 00:12:14.750323 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 00:12:14.752129 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 00:12:14.764549 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 00:12:14.777104 systemd[1]: Switching root.
Apr 28 00:12:14.817510 systemd-journald[237]: Journal stopped
Apr 28 00:12:15.783544 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 28 00:12:15.783603 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 00:12:15.783616 kernel: SELinux: policy capability open_perms=1
Apr 28 00:12:15.783629 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 00:12:15.783639 kernel: SELinux: policy capability always_check_network=0
Apr 28 00:12:15.783652 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 00:12:15.784720 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 00:12:15.784741 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 00:12:15.784752 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 00:12:15.784762 kernel: audit: type=1403 audit(1777335134.973:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 00:12:15.784786 systemd[1]: Successfully loaded SELinux policy in 35.905ms.
Apr 28 00:12:15.784814 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.057ms.
Apr 28 00:12:15.784826 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 00:12:15.784838 systemd[1]: Detected virtualization kvm.
Apr 28 00:12:15.784852 systemd[1]: Detected architecture arm64.
Apr 28 00:12:15.784863 systemd[1]: Detected first boot.
Apr 28 00:12:15.784873 systemd[1]: Hostname set to .
Apr 28 00:12:15.784888 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 00:12:15.784898 zram_generator::config[1059]: No configuration found.
Apr 28 00:12:15.784909 systemd[1]: Populated /etc with preset unit settings.
Apr 28 00:12:15.784919 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 28 00:12:15.784932 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 28 00:12:15.784943 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 28 00:12:15.784955 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 00:12:15.784966 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 00:12:15.784980 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 00:12:15.784991 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 00:12:15.785002 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 00:12:15.785013 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 00:12:15.785023 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 00:12:15.785035 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 00:12:15.785046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:12:15.785057 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:12:15.785068 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 00:12:15.785078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 00:12:15.785089 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 00:12:15.785100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 00:12:15.785110 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 28 00:12:15.785121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:12:15.785133 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 28 00:12:15.785143 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 28 00:12:15.785154 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 28 00:12:15.785166 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 28 00:12:15.785177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 28 00:12:15.785188 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 28 00:12:15.785201 systemd[1]: Reached target slices.target - Slice Units. Apr 28 00:12:15.785212 systemd[1]: Reached target swap.target - Swaps. Apr 28 00:12:15.785223 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 28 00:12:15.785233 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 28 00:12:15.785244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 28 00:12:15.785255 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 28 00:12:15.785265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 28 00:12:15.785276 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 28 00:12:15.785287 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 28 00:12:15.785299 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 28 00:12:15.785309 systemd[1]: Mounting media.mount - External Media Directory... Apr 28 00:12:15.785324 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 28 00:12:15.785335 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 28 00:12:15.785345 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 28 00:12:15.785356 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 28 00:12:15.785367 systemd[1]: Reached target machines.target - Containers. Apr 28 00:12:15.785377 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 28 00:12:15.785388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:12:15.785400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 28 00:12:15.785413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 28 00:12:15.785428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:12:15.785441 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:12:15.785451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:12:15.785464 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 28 00:12:15.785475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:12:15.785486 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 28 00:12:15.785496 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 28 00:12:15.785507 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 28 00:12:15.785517 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 28 00:12:15.785528 systemd[1]: Stopped systemd-fsck-usr.service. Apr 28 00:12:15.785538 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 28 00:12:15.785549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 28 00:12:15.785562 kernel: loop: module loaded Apr 28 00:12:15.785572 kernel: fuse: init (API version 7.39) Apr 28 00:12:15.785582 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 28 00:12:15.785593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 28 00:12:15.785604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 28 00:12:15.785615 systemd[1]: verity-setup.service: Deactivated successfully. Apr 28 00:12:15.785628 systemd[1]: Stopped verity-setup.service. Apr 28 00:12:15.785641 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 28 00:12:15.786900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 28 00:12:15.786947 systemd[1]: Mounted media.mount - External Media Directory. Apr 28 00:12:15.786963 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 28 00:12:15.786976 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 28 00:12:15.786987 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 28 00:12:15.786998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 28 00:12:15.787017 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 28 00:12:15.787028 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 28 00:12:15.787039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:12:15.787049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:12:15.787060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:12:15.787070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:12:15.787083 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 28 00:12:15.787094 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Apr 28 00:12:15.787105 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 28 00:12:15.787115 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:12:15.787126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:12:15.787137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 28 00:12:15.787148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 28 00:12:15.787159 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 28 00:12:15.787203 systemd-journald[1129]: Collecting audit messages is disabled. Apr 28 00:12:15.787233 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 28 00:12:15.787245 systemd-journald[1129]: Journal started Apr 28 00:12:15.787304 systemd-journald[1129]: Runtime Journal (/run/log/journal/01edfbbc3cfd4a92b295fae22270f9b4) is 8.0M, max 76.6M, 68.6M free. Apr 28 00:12:15.791709 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 28 00:12:15.459746 systemd[1]: Queued start job for default target multi-user.target. Apr 28 00:12:15.479444 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 28 00:12:15.479979 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 28 00:12:15.798724 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 28 00:12:15.801130 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 28 00:12:15.801223 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 28 00:12:15.805696 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 28 00:12:15.813820 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Apr 28 00:12:15.821687 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 28 00:12:15.821822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:12:15.828686 kernel: ACPI: bus type drm_connector registered Apr 28 00:12:15.828826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 28 00:12:15.831837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:12:15.845079 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 28 00:12:15.847792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:12:15.851501 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:12:15.856838 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 28 00:12:15.861859 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 28 00:12:15.870210 systemd[1]: Started systemd-journald.service - Journal Service. Apr 28 00:12:15.871302 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:12:15.871504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:12:15.873787 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 28 00:12:15.884008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 28 00:12:15.885651 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 28 00:12:15.888711 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 28 00:12:15.899252 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 28 00:12:15.919017 kernel: loop0: detected capacity change from 0 to 114432 Apr 28 00:12:15.918506 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 28 00:12:15.924970 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 28 00:12:15.933006 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 28 00:12:15.938253 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 28 00:12:15.944065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:12:15.961944 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 28 00:12:15.971516 systemd-journald[1129]: Time spent on flushing to /var/log/journal/01edfbbc3cfd4a92b295fae22270f9b4 is 33.731ms for 1132 entries. Apr 28 00:12:15.971516 systemd-journald[1129]: System Journal (/var/log/journal/01edfbbc3cfd4a92b295fae22270f9b4) is 8.0M, max 584.8M, 576.8M free. Apr 28 00:12:16.019445 systemd-journald[1129]: Received client request to flush runtime journal. Apr 28 00:12:16.019491 kernel: loop1: detected capacity change from 0 to 114328 Apr 28 00:12:15.986732 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 28 00:12:15.991146 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 28 00:12:15.998240 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Apr 28 00:12:15.998252 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Apr 28 00:12:16.020555 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 28 00:12:16.025943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 28 00:12:16.029713 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Apr 28 00:12:16.041025 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 28 00:12:16.044721 kernel: loop2: detected capacity change from 0 to 197488 Apr 28 00:12:16.101214 kernel: loop3: detected capacity change from 0 to 8 Apr 28 00:12:16.101653 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 28 00:12:16.115389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 28 00:12:16.121715 kernel: loop4: detected capacity change from 0 to 114432 Apr 28 00:12:16.137841 kernel: loop5: detected capacity change from 0 to 114328 Apr 28 00:12:16.140400 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Apr 28 00:12:16.140424 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Apr 28 00:12:16.145373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 28 00:12:16.156054 kernel: loop6: detected capacity change from 0 to 197488 Apr 28 00:12:16.181023 kernel: loop7: detected capacity change from 0 to 8 Apr 28 00:12:16.181433 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 28 00:12:16.182913 (sd-merge)[1200]: Merged extensions into '/usr'. Apr 28 00:12:16.188985 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Apr 28 00:12:16.189100 systemd[1]: Reloading... Apr 28 00:12:16.332713 zram_generator::config[1228]: No configuration found. Apr 28 00:12:16.424699 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 28 00:12:16.480474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:12:16.526923 systemd[1]: Reloading finished in 336 ms. 
Apr 28 00:12:16.570701 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 28 00:12:16.572453 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 28 00:12:16.582076 systemd[1]: Starting ensure-sysext.service... Apr 28 00:12:16.585862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 28 00:12:16.594995 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Apr 28 00:12:16.595144 systemd[1]: Reloading... Apr 28 00:12:16.632517 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 28 00:12:16.632888 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 28 00:12:16.633608 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 28 00:12:16.634293 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Apr 28 00:12:16.634340 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Apr 28 00:12:16.638297 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:12:16.638314 systemd-tmpfiles[1266]: Skipping /boot Apr 28 00:12:16.652044 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Apr 28 00:12:16.652060 systemd-tmpfiles[1266]: Skipping /boot Apr 28 00:12:16.672699 zram_generator::config[1288]: No configuration found. Apr 28 00:12:16.787254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:12:16.834078 systemd[1]: Reloading finished in 238 ms. Apr 28 00:12:16.853487 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Apr 28 00:12:16.864541 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 28 00:12:16.889203 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 00:12:16.895035 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 28 00:12:16.900782 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 28 00:12:16.907102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 28 00:12:16.912049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 28 00:12:16.919020 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 28 00:12:16.923978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:12:16.930999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:12:16.937038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:12:16.948486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:12:16.950845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:12:16.951731 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 28 00:12:16.956284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:12:16.956477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:12:16.968774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:12:16.974528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 28 00:12:16.975938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:12:16.986053 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 28 00:12:16.990496 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 28 00:12:16.994707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 28 00:12:16.997175 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 28 00:12:16.999370 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:12:17.000706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:12:17.002084 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:12:17.002226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:12:17.004209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:12:17.004344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:12:17.020543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:12:17.024004 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Apr 28 00:12:17.032056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:12:17.036137 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 28 00:12:17.041876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:12:17.046738 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:12:17.047791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 28 00:12:17.048911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 00:12:17.051715 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 28 00:12:17.061212 systemd[1]: Finished ensure-sysext.service. Apr 28 00:12:17.063951 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 28 00:12:17.068412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:12:17.069766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:12:17.072066 augenrules[1371]: No rules Apr 28 00:12:17.082529 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 28 00:12:17.093892 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 28 00:12:17.096203 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 00:12:17.110611 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 28 00:12:17.130995 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 28 00:12:17.131177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:12:17.132223 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 28 00:12:17.132360 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 28 00:12:17.133574 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:12:17.139216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:12:17.142852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 28 00:12:17.144018 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:12:17.181708 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 28 00:12:17.250162 systemd-resolved[1338]: Positive Trust Anchors: Apr 28 00:12:17.250178 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 28 00:12:17.250212 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 28 00:12:17.259213 systemd-resolved[1338]: Using system hostname 'ci-4081-3-7-n-651e172f95'. Apr 28 00:12:17.263608 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 28 00:12:17.264900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 28 00:12:17.267473 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 28 00:12:17.268482 systemd[1]: Reached target time-set.target - System Time Set. Apr 28 00:12:17.270528 systemd-networkd[1387]: lo: Link UP Apr 28 00:12:17.271351 systemd-networkd[1387]: lo: Gained carrier Apr 28 00:12:17.272182 systemd-networkd[1387]: Enumeration completed Apr 28 00:12:17.272706 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 28 00:12:17.273493 systemd[1]: Reached target network.target - Network. 
Apr 28 00:12:17.274303 systemd-timesyncd[1391]: No network connectivity, watching for changes. Apr 28 00:12:17.280951 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 28 00:12:17.336547 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:12:17.336556 systemd-networkd[1387]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 28 00:12:17.339353 systemd-networkd[1387]: eth1: Link UP Apr 28 00:12:17.339360 systemd-networkd[1387]: eth1: Gained carrier Apr 28 00:12:17.339380 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:12:17.372840 kernel: mousedev: PS/2 mouse device common for all mice Apr 28 00:12:17.382059 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 28 00:12:17.382192 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 28 00:12:17.382366 systemd-networkd[1387]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 28 00:12:17.383443 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Apr 28 00:12:17.388030 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:12:17.388040 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 28 00:12:17.388641 systemd-networkd[1387]: eth0: Link UP Apr 28 00:12:17.388645 systemd-networkd[1387]: eth0: Gained carrier Apr 28 00:12:17.388675 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 28 00:12:17.407090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 28 00:12:17.412013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 28 00:12:17.416084 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Apr 28 00:12:17.420972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 28 00:12:17.421629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 28 00:12:17.421734 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 28 00:12:17.422371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 28 00:12:17.422614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 28 00:12:17.443013 systemd-networkd[1387]: eth0: DHCPv4 address 178.105.25.61/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 28 00:12:17.444404 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Apr 28 00:12:17.445723 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1386) Apr 28 00:12:17.448613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 28 00:12:17.452025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 28 00:12:17.453363 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 28 00:12:17.453554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 28 00:12:17.457169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 28 00:12:17.457224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 28 00:12:17.464309 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 28 00:12:17.464384 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 28 00:12:17.464396 kernel: [drm] features: -context_init Apr 28 00:12:17.475973 kernel: [drm] number of scanouts: 1 Apr 28 00:12:17.476086 kernel: [drm] number of cap sets: 0 Apr 28 00:12:17.482814 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 28 00:12:17.487191 kernel: Console: switching to colour frame buffer device 160x50 Apr 28 00:12:17.508693 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 28 00:12:17.544040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:12:17.557082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 28 00:12:17.558112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 28 00:12:17.561579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 28 00:12:17.569991 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 28 00:12:17.573928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 28 00:12:17.587465 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 28 00:12:17.643793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 28 00:12:17.693678 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 28 00:12:17.703022 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 28 00:12:17.717356 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:12:17.748811 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 28 00:12:17.750515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 28 00:12:17.751594 systemd[1]: Reached target sysinit.target - System Initialization. Apr 28 00:12:17.752737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 28 00:12:17.753796 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 28 00:12:17.754867 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 28 00:12:17.755604 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 28 00:12:17.756536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 28 00:12:17.757357 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 28 00:12:17.757394 systemd[1]: Reached target paths.target - Path Units. Apr 28 00:12:17.757981 systemd[1]: Reached target timers.target - Timer Units. Apr 28 00:12:17.760792 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 28 00:12:17.763739 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 28 00:12:17.770646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 28 00:12:17.773324 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 28 00:12:17.774623 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 28 00:12:17.775411 systemd[1]: Reached target sockets.target - Socket Units. Apr 28 00:12:17.776134 systemd[1]: Reached target basic.target - Basic System. Apr 28 00:12:17.776729 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:12:17.776776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 28 00:12:17.780950 systemd[1]: Starting containerd.service - containerd container runtime... Apr 28 00:12:17.783790 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 28 00:12:17.785316 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 28 00:12:17.791331 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 28 00:12:17.793953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 28 00:12:17.797918 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 28 00:12:17.799049 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 28 00:12:17.801005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 28 00:12:17.806871 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 28 00:12:17.810009 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 28 00:12:17.814010 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 28 00:12:17.817015 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 28 00:12:17.825305 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 28 00:12:17.828500 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 00:12:17.829068 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 00:12:17.831961 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 00:12:17.846139 jq[1462]: false
Apr 28 00:12:17.853284 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 00:12:17.855886 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 28 00:12:17.868482 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 00:12:17.869212 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 00:12:17.882356 dbus-daemon[1461]: [system] SELinux support is enabled
Apr 28 00:12:17.883172 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 00:12:17.889846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 00:12:17.889903 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 00:12:17.891048 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 00:12:17.891078 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 00:12:17.895703 jq[1473]: true
Apr 28 00:12:17.907101 coreos-metadata[1460]: Apr 28 00:12:17.897 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 28 00:12:17.907101 coreos-metadata[1460]: Apr 28 00:12:17.897 INFO Fetch successful
Apr 28 00:12:17.907101 coreos-metadata[1460]: Apr 28 00:12:17.897 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 28 00:12:17.907101 coreos-metadata[1460]: Apr 28 00:12:17.897 INFO Fetch successful
Apr 28 00:12:17.916982 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 00:12:17.917168 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 00:12:17.928472 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 00:12:17.953418 jq[1492]: true
Apr 28 00:12:17.957155 tar[1480]: linux-arm64/LICENSE
Apr 28 00:12:17.960157 tar[1480]: linux-arm64/helm
Apr 28 00:12:17.959949 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found loop4
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found loop5
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found loop6
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found loop7
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found sda
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found sda1
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found sda2
Apr 28 00:12:17.960675 extend-filesystems[1463]: Found sda3
Apr 28 00:12:17.986554 extend-filesystems[1463]: Found usr
Apr 28 00:12:17.986554 extend-filesystems[1463]: Found sda4
Apr 28 00:12:17.986554 extend-filesystems[1463]: Found sda6
Apr 28 00:12:17.986554 extend-filesystems[1463]: Found sda7
Apr 28 00:12:17.986554 extend-filesystems[1463]: Found sda9
Apr 28 00:12:17.986554 extend-filesystems[1463]: Checking size of /dev/sda9
Apr 28 00:12:18.004831 update_engine[1472]: I20260428 00:12:17.972447 1472 main.cc:92] Flatcar Update Engine starting
Apr 28 00:12:18.004831 update_engine[1472]: I20260428 00:12:17.988587 1472 update_check_scheduler.cc:74] Next update check in 10m32s
Apr 28 00:12:17.960733 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 00:12:17.988397 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 00:12:18.009963 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 00:12:18.026691 extend-filesystems[1463]: Resized partition /dev/sda9
Apr 28 00:12:18.034911 extend-filesystems[1513]: resize2fs 1.47.1 (20-May-2024)
Apr 28 00:12:18.051674 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 28 00:12:18.072923 systemd-logind[1471]: New seat seat0.
Apr 28 00:12:18.074963 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 28 00:12:18.074986 systemd-logind[1471]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 28 00:12:18.077101 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 00:12:18.087846 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 28 00:12:18.089054 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 28 00:12:18.161682 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1389)
Apr 28 00:12:18.191423 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 00:12:18.196981 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 00:12:18.213996 systemd[1]: Starting sshkeys.service...
Apr 28 00:12:18.240090 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 28 00:12:18.251766 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 28 00:12:18.279681 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 28 00:12:18.286951 coreos-metadata[1540]: Apr 28 00:12:18.286 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 28 00:12:18.313768 coreos-metadata[1540]: Apr 28 00:12:18.289 INFO Fetch successful
Apr 28 00:12:18.315026 unknown[1540]: wrote ssh authorized keys file for user: core
Apr 28 00:12:18.315333 extend-filesystems[1513]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 28 00:12:18.315333 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 28 00:12:18.315333 extend-filesystems[1513]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 28 00:12:18.322584 extend-filesystems[1463]: Resized filesystem in /dev/sda9
Apr 28 00:12:18.322584 extend-filesystems[1463]: Found sr0
Apr 28 00:12:18.317080 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 28 00:12:18.319501 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 28 00:12:18.338089 containerd[1486]: time="2026-04-28T00:12:18.336371120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 28 00:12:18.358577 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 00:12:18.360496 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 28 00:12:18.367826 systemd[1]: Finished sshkeys.service.
Apr 28 00:12:18.380816 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 28 00:12:18.421797 systemd-networkd[1387]: eth0: Gained IPv6LL
Apr 28 00:12:18.422387 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Apr 28 00:12:18.427018 containerd[1486]: time="2026-04-28T00:12:18.426972040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:12:18.427712 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 28 00:12:18.429596 systemd[1]: Reached target network-online.target - Network is Online.
Apr 28 00:12:18.432881 containerd[1486]: time="2026-04-28T00:12:18.432836880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:12:18.433321 containerd[1486]: time="2026-04-28T00:12:18.432971200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 28 00:12:18.433321 containerd[1486]: time="2026-04-28T00:12:18.432993560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 28 00:12:18.433321 containerd[1486]: time="2026-04-28T00:12:18.433162560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 28 00:12:18.433321 containerd[1486]: time="2026-04-28T00:12:18.433184880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 28 00:12:18.433321 containerd[1486]: time="2026-04-28T00:12:18.433251480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:12:18.433321 containerd[1486]: time="2026-04-28T00:12:18.433263760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:12:18.434787 containerd[1486]: time="2026-04-28T00:12:18.433591240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:12:18.434787 containerd[1486]: time="2026-04-28T00:12:18.433616840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 28 00:12:18.434787 containerd[1486]: time="2026-04-28T00:12:18.433631360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:12:18.434787 containerd[1486]: time="2026-04-28T00:12:18.433641320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 28 00:12:18.435818 containerd[1486]: time="2026-04-28T00:12:18.435790920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:12:18.437927 containerd[1486]: time="2026-04-28T00:12:18.437904080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:12:18.438193 containerd[1486]: time="2026-04-28T00:12:18.438174600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:12:18.438265 containerd[1486]: time="2026-04-28T00:12:18.438252960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 28 00:12:18.438453 containerd[1486]: time="2026-04-28T00:12:18.438435120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 28 00:12:18.438630 containerd[1486]: time="2026-04-28T00:12:18.438611760Z" level=info msg="metadata content store policy set" policy=shared
Apr 28 00:12:18.440090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:12:18.444146 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.447411040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.447482840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.447502320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.447527040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.447543000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.447787040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448048600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448155200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448172960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448185960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448199920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448219200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448231640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.448694 containerd[1486]: time="2026-04-28T00:12:18.448248360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448265640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448290320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448304160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448316720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448339000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448353680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448365840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448379920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448392640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448405920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448417640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448431160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448444600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449104 containerd[1486]: time="2026-04-28T00:12:18.448459800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449338 containerd[1486]: time="2026-04-28T00:12:18.448474560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449338 containerd[1486]: time="2026-04-28T00:12:18.448488640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449338 containerd[1486]: time="2026-04-28T00:12:18.448505520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449338 containerd[1486]: time="2026-04-28T00:12:18.448524120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 28 00:12:18.449338 containerd[1486]: time="2026-04-28T00:12:18.448544960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449338 containerd[1486]: time="2026-04-28T00:12:18.448557280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.449338 containerd[1486]: time="2026-04-28T00:12:18.448569040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 28 00:12:18.452700 containerd[1486]: time="2026-04-28T00:12:18.451378800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 28 00:12:18.452808 containerd[1486]: time="2026-04-28T00:12:18.451421920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 28 00:12:18.452957 containerd[1486]: time="2026-04-28T00:12:18.452851200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 28 00:12:18.452957 containerd[1486]: time="2026-04-28T00:12:18.452894960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 28 00:12:18.452957 containerd[1486]: time="2026-04-28T00:12:18.452906680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.452957 containerd[1486]: time="2026-04-28T00:12:18.452927560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 28 00:12:18.452957 containerd[1486]: time="2026-04-28T00:12:18.452939360Z" level=info msg="NRI interface is disabled by configuration."
Apr 28 00:12:18.453671 containerd[1486]: time="2026-04-28T00:12:18.453091880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 28 00:12:18.453720 containerd[1486]: time="2026-04-28T00:12:18.453525080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 28 00:12:18.453720 containerd[1486]: time="2026-04-28T00:12:18.453602880Z" level=info msg="Connect containerd service"
Apr 28 00:12:18.454015 containerd[1486]: time="2026-04-28T00:12:18.453656440Z" level=info msg="using legacy CRI server"
Apr 28 00:12:18.454015 containerd[1486]: time="2026-04-28T00:12:18.453954160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 28 00:12:18.454171 containerd[1486]: time="2026-04-28T00:12:18.454146480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 28 00:12:18.458920 containerd[1486]: time="2026-04-28T00:12:18.458455960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 00:12:18.459823 containerd[1486]: time="2026-04-28T00:12:18.459736600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 28 00:12:18.462868 systemd[1]: Started containerd.service - containerd container runtime.
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.462227400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.461021240Z" level=info msg="Start subscribing containerd event"
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.462314800Z" level=info msg="Start recovering state"
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.462403040Z" level=info msg="Start event monitor"
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.462415520Z" level=info msg="Start snapshots syncer"
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.462429480Z" level=info msg="Start cni network conf syncer for default"
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.462542320Z" level=info msg="Start streaming server"
Apr 28 00:12:18.464079 containerd[1486]: time="2026-04-28T00:12:18.462811800Z" level=info msg="containerd successfully booted in 0.132281s"
Apr 28 00:12:18.504314 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 28 00:12:18.978318 tar[1480]: linux-arm64/README.md
Apr 28 00:12:18.999578 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 28 00:12:19.118586 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 28 00:12:19.142356 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 28 00:12:19.151298 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 28 00:12:19.159548 systemd[1]: issuegen.service: Deactivated successfully.
Apr 28 00:12:19.159794 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 28 00:12:19.171295 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 28 00:12:19.189723 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 28 00:12:19.199556 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 28 00:12:19.209106 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 28 00:12:19.210008 systemd[1]: Reached target getty.target - Login Prompts.
Apr 28 00:12:19.318007 systemd-networkd[1387]: eth1: Gained IPv6LL
Apr 28 00:12:19.318917 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Apr 28 00:12:19.396005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:12:19.397307 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 28 00:12:19.401783 systemd[1]: Startup finished in 798ms (kernel) + 5.285s (initrd) + 4.463s (userspace) = 10.548s.
Apr 28 00:12:19.412227 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:12:19.852950 kubelet[1591]: E0428 00:12:19.852868 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:12:19.856116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:12:19.856272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:12:29.883815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 28 00:12:29.897062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:12:30.029965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:12:30.044277 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:12:30.090143 kubelet[1610]: E0428 00:12:30.090063 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:12:30.093839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:12:30.093983 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:12:39.296874 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 28 00:12:39.306476 systemd[1]: Started sshd@0-178.105.25.61:22-50.85.169.122:42144.service - OpenSSH per-connection server daemon (50.85.169.122:42144).
Apr 28 00:12:39.435699 sshd[1619]: Accepted publickey for core from 50.85.169.122 port 42144 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:12:39.438959 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:12:39.449143 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 28 00:12:39.464196 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 28 00:12:39.471735 systemd-logind[1471]: New session 1 of user core.
Apr 28 00:12:39.482409 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 28 00:12:39.497232 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 28 00:12:39.502211 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 28 00:12:39.611938 systemd[1623]: Queued start job for default target default.target.
Apr 28 00:12:39.624527 systemd[1623]: Created slice app.slice - User Application Slice.
Apr 28 00:12:39.624604 systemd[1623]: Reached target paths.target - Paths.
Apr 28 00:12:39.624632 systemd[1623]: Reached target timers.target - Timers.
Apr 28 00:12:39.627201 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 28 00:12:39.642084 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 28 00:12:39.642227 systemd[1623]: Reached target sockets.target - Sockets.
Apr 28 00:12:39.642246 systemd[1623]: Reached target basic.target - Basic System.
Apr 28 00:12:39.642297 systemd[1623]: Reached target default.target - Main User Target.
Apr 28 00:12:39.642331 systemd[1623]: Startup finished in 132ms.
Apr 28 00:12:39.642945 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 28 00:12:39.654943 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 28 00:12:39.783411 systemd[1]: Started sshd@1-178.105.25.61:22-50.85.169.122:42156.service - OpenSSH per-connection server daemon (50.85.169.122:42156).
Apr 28 00:12:39.903602 sshd[1634]: Accepted publickey for core from 50.85.169.122 port 42156 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:12:39.905025 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:12:39.910750 systemd-logind[1471]: New session 2 of user core.
Apr 28 00:12:39.922013 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 28 00:12:40.027326 sshd[1634]: pam_unix(sshd:session): session closed for user core
Apr 28 00:12:40.032207 systemd[1]: sshd@1-178.105.25.61:22-50.85.169.122:42156.service: Deactivated successfully.
Apr 28 00:12:40.035060 systemd[1]: session-2.scope: Deactivated successfully.
Apr 28 00:12:40.037755 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit.
Apr 28 00:12:40.039186 systemd-logind[1471]: Removed session 2.
Apr 28 00:12:40.056153 systemd[1]: Started sshd@2-178.105.25.61:22-50.85.169.122:42160.service - OpenSSH per-connection server daemon (50.85.169.122:42160).
Apr 28 00:12:40.133896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 28 00:12:40.140977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:12:40.180112 sshd[1641]: Accepted publickey for core from 50.85.169.122 port 42160 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:12:40.181421 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:12:40.191736 systemd-logind[1471]: New session 3 of user core.
Apr 28 00:12:40.195891 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 28 00:12:40.268373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:12:40.281272 (kubelet)[1652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:12:40.294921 sshd[1641]: pam_unix(sshd:session): session closed for user core
Apr 28 00:12:40.302910 systemd[1]: sshd@2-178.105.25.61:22-50.85.169.122:42160.service: Deactivated successfully.
Apr 28 00:12:40.305727 systemd[1]: session-3.scope: Deactivated successfully.
Apr 28 00:12:40.306369 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit.
Apr 28 00:12:40.308064 systemd-logind[1471]: Removed session 3.
Apr 28 00:12:40.324966 systemd[1]: Started sshd@3-178.105.25.61:22-50.85.169.122:42170.service - OpenSSH per-connection server daemon (50.85.169.122:42170).
Apr 28 00:12:40.345209 kubelet[1652]: E0428 00:12:40.345165 1652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:12:40.348227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:12:40.348433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:12:40.449967 sshd[1662]: Accepted publickey for core from 50.85.169.122 port 42170 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:12:40.453095 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:12:40.459033 systemd-logind[1471]: New session 4 of user core.
Apr 28 00:12:40.468144 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 28 00:12:40.570191 sshd[1662]: pam_unix(sshd:session): session closed for user core
Apr 28 00:12:40.575713 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit.
Apr 28 00:12:40.576111 systemd[1]: sshd@3-178.105.25.61:22-50.85.169.122:42170.service: Deactivated successfully.
Apr 28 00:12:40.578683 systemd[1]: session-4.scope: Deactivated successfully.
Apr 28 00:12:40.581204 systemd-logind[1471]: Removed session 4.
Apr 28 00:12:40.600033 systemd[1]: Started sshd@4-178.105.25.61:22-50.85.169.122:42174.service - OpenSSH per-connection server daemon (50.85.169.122:42174).
Apr 28 00:12:40.721449 sshd[1670]: Accepted publickey for core from 50.85.169.122 port 42174 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:12:40.722580 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:12:40.727270 systemd-logind[1471]: New session 5 of user core.
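The `run.go:72` failure above is the standard kubelet exit on a node where kubeadm has not yet written `/var/lib/kubelet/config.yaml`: the kubelet refuses to start without its config file and systemd records status=1/FAILURE. A minimal Python sketch of the same check (the helper name is hypothetical; the error message shape mirrors the log):

```python
from pathlib import Path


def load_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> str:
    """Read the kubeadm-managed kubelet config, failing the way the log shows.

    Raises RuntimeError with a message shaped like the kubelet's
    "failed to load Kubelet config file" error when the file is absent.
    """
    p = Path(path)
    if not p.is_file():
        raise RuntimeError(
            f"failed to load Kubelet config file {path}, "
            f"error: open {path}: no such file or directory"
        )
    return p.read_text()
```

With `Restart=` configured in the unit, systemd then schedules the next attempt, which is why the restart counter keeps climbing until `kubeadm init`/`join` writes the file.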
Apr 28 00:12:40.738011 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 28 00:12:40.834778 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 28 00:12:40.835092 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:12:40.850856 sudo[1673]: pam_unix(sudo:session): session closed for user root
Apr 28 00:12:40.868467 sshd[1670]: pam_unix(sshd:session): session closed for user core
Apr 28 00:12:40.873993 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit.
Apr 28 00:12:40.875302 systemd[1]: sshd@4-178.105.25.61:22-50.85.169.122:42174.service: Deactivated successfully.
Apr 28 00:12:40.877463 systemd[1]: session-5.scope: Deactivated successfully.
Apr 28 00:12:40.878723 systemd-logind[1471]: Removed session 5.
Apr 28 00:12:40.898343 systemd[1]: Started sshd@5-178.105.25.61:22-50.85.169.122:42184.service - OpenSSH per-connection server daemon (50.85.169.122:42184).
Apr 28 00:12:41.022310 sshd[1678]: Accepted publickey for core from 50.85.169.122 port 42184 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:12:41.025401 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:12:41.031321 systemd-logind[1471]: New session 6 of user core.
Apr 28 00:12:41.043381 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 28 00:12:41.126422 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 28 00:12:41.126746 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:12:41.130767 sudo[1682]: pam_unix(sudo:session): session closed for user root
Apr 28 00:12:41.136301 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 28 00:12:41.136691 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:12:41.159407 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 28 00:12:41.161889 auditctl[1685]: No rules
Apr 28 00:12:41.162514 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 28 00:12:41.162851 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 28 00:12:41.166078 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 00:12:41.192998 augenrules[1703]: No rules
Apr 28 00:12:41.194649 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 00:12:41.197368 sudo[1681]: pam_unix(sudo:session): session closed for user root
Apr 28 00:12:41.213102 sshd[1678]: pam_unix(sshd:session): session closed for user core
Apr 28 00:12:41.219016 systemd[1]: sshd@5-178.105.25.61:22-50.85.169.122:42184.service: Deactivated successfully.
Apr 28 00:12:41.221208 systemd[1]: session-6.scope: Deactivated successfully.
Apr 28 00:12:41.222015 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit.
Apr 28 00:12:41.223119 systemd-logind[1471]: Removed session 6.
Apr 28 00:12:41.248465 systemd[1]: Started sshd@6-178.105.25.61:22-50.85.169.122:42188.service - OpenSSH per-connection server daemon (50.85.169.122:42188).
Apr 28 00:12:41.372381 sshd[1711]: Accepted publickey for core from 50.85.169.122 port 42188 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE
Apr 28 00:12:41.374323 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:12:41.380122 systemd-logind[1471]: New session 7 of user core.
Apr 28 00:12:41.386005 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 28 00:12:41.473742 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 28 00:12:41.474039 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 28 00:12:41.778599 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 28 00:12:41.779369 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 28 00:12:42.035831 dockerd[1730]: time="2026-04-28T00:12:42.035595400Z" level=info msg="Starting up"
Apr 28 00:12:42.135215 dockerd[1730]: time="2026-04-28T00:12:42.135148080Z" level=info msg="Loading containers: start."
Apr 28 00:12:42.232795 kernel: Initializing XFRM netlink socket
Apr 28 00:12:42.259513 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Apr 28 00:12:42.269077 systemd-timesyncd[1391]: Contacted time server 94.130.184.193:123 (2.flatcar.pool.ntp.org).
Apr 28 00:12:42.269139 systemd-timesyncd[1391]: Initial clock synchronization to Tue 2026-04-28 00:12:42.385467 UTC.
Apr 28 00:12:42.309553 systemd-networkd[1387]: docker0: Link UP
Apr 28 00:12:42.329688 dockerd[1730]: time="2026-04-28T00:12:42.329305280Z" level=info msg="Loading containers: done."
Apr 28 00:12:42.348437 dockerd[1730]: time="2026-04-28T00:12:42.348293320Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 00:12:42.348655 dockerd[1730]: time="2026-04-28T00:12:42.348433120Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 28 00:12:42.348655 dockerd[1730]: time="2026-04-28T00:12:42.348644080Z" level=info msg="Daemon has completed initialization"
Apr 28 00:12:42.348896 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4114047061-merged.mount: Deactivated successfully.
Apr 28 00:12:42.395024 dockerd[1730]: time="2026-04-28T00:12:42.394009000Z" level=info msg="API listen on /run/docker.sock"
Apr 28 00:12:42.394367 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 28 00:12:42.855378 containerd[1486]: time="2026-04-28T00:12:42.855337920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 28 00:12:43.423833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount884928884.mount: Deactivated successfully.
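Once dockerd reports "API listen on /run/docker.sock", the engine serves its HTTP API over that Unix socket; `GET /_ping` is the documented liveness endpoint. A stdlib-only Python sketch of checking it (the class and function names are ours, and it assumes a running daemon at the default socket path):

```python
import http.client
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix-domain socket instead of TCP."""

    def __init__(self, sock_path: str):
        super().__init__("localhost")  # host is only used for the Host: header
        self.sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s


def docker_ping(sock_path: str = "/run/docker.sock") -> bool:
    """Return True if the engine answers GET /_ping with 200 and body 'OK'."""
    conn = UnixHTTPConnection(sock_path)
    try:
        conn.request("GET", "/_ping")
        resp = conn.getresponse()
        return resp.status == 200 and resp.read() == b"OK"
    finally:
        conn.close()
```

The same connection class works for any other Engine API path (e.g. `/version`), which is essentially what the Docker CLI does under the hood.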
Apr 28 00:12:44.615778 containerd[1486]: time="2026-04-28T00:12:44.615698197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:44.617550 containerd[1486]: time="2026-04-28T00:12:44.617093044Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=24608883"
Apr 28 00:12:44.620700 containerd[1486]: time="2026-04-28T00:12:44.618804195Z" level=info msg="ImageCreate event name:\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:44.624456 containerd[1486]: time="2026-04-28T00:12:44.624397923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:44.626987 containerd[1486]: time="2026-04-28T00:12:44.626930945Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"24605384\" in 1.771547775s"
Apr 28 00:12:44.626987 containerd[1486]: time="2026-04-28T00:12:44.626987133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\""
Apr 28 00:12:44.630247 containerd[1486]: time="2026-04-28T00:12:44.630209152Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 28 00:12:45.911741 containerd[1486]: time="2026-04-28T00:12:45.911649807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:45.914084 containerd[1486]: time="2026-04-28T00:12:45.914034124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=19073314"
Apr 28 00:12:45.915569 containerd[1486]: time="2026-04-28T00:12:45.915453510Z" level=info msg="ImageCreate event name:\"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:45.920080 containerd[1486]: time="2026-04-28T00:12:45.920015664Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"20579933\" in 1.289655586s"
Apr 28 00:12:45.920080 containerd[1486]: time="2026-04-28T00:12:45.920060560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\""
Apr 28 00:12:45.920252 containerd[1486]: time="2026-04-28T00:12:45.920174821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:45.920814 containerd[1486]: time="2026-04-28T00:12:45.920770677Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 28 00:12:46.873702 containerd[1486]: time="2026-04-28T00:12:46.871792765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:46.873702 containerd[1486]: time="2026-04-28T00:12:46.873540912Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=13800856"
Apr 28 00:12:46.874013 containerd[1486]: time="2026-04-28T00:12:46.873977736Z" level=info msg="ImageCreate event name:\"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:46.878818 containerd[1486]: time="2026-04-28T00:12:46.878779898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:46.880198 containerd[1486]: time="2026-04-28T00:12:46.880156208Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"15307493\" in 959.351895ms"
Apr 28 00:12:46.880273 containerd[1486]: time="2026-04-28T00:12:46.880199547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\""
Apr 28 00:12:46.880684 containerd[1486]: time="2026-04-28T00:12:46.880630960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 28 00:12:47.859847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3111845919.mount: Deactivated successfully.
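Each "Pulled image … in …" entry above pairs a reported image size with a wall-clock duration, so the effective registry throughput can be read straight off the log. A small sketch using two figures copied from the entries above (the function name is ours):

```python
def pull_rate_mbps(size_bytes: int, seconds: float) -> float:
    """Effective image-pull throughput in decimal megabytes per second."""
    return size_bytes / seconds / 1_000_000


# kube-apiserver:v1.35.4 — size 24605384 bytes, pulled in 1.771547775 s
apiserver = pull_rate_mbps(24605384, 1.771547775)

# kube-scheduler:v1.35.4 — size 15307493 bytes, pulled in 959.351895 ms
scheduler = pull_rate_mbps(15307493, 0.959351895)
```

Both work out to roughly 14-16 MB/s, i.e. the pulls in this boot were bandwidth-bound at a consistent rate rather than stalling on any single layer.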
Apr 28 00:12:48.085333 containerd[1486]: time="2026-04-28T00:12:48.084624030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:48.086866 containerd[1486]: time="2026-04-28T00:12:48.086837053Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=22340610"
Apr 28 00:12:48.087904 containerd[1486]: time="2026-04-28T00:12:48.087839855Z" level=info msg="ImageCreate event name:\"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:48.090384 containerd[1486]: time="2026-04-28T00:12:48.090335214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:48.091397 containerd[1486]: time="2026-04-28T00:12:48.091364734Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"22339603\" in 1.209775856s"
Apr 28 00:12:48.091495 containerd[1486]: time="2026-04-28T00:12:48.091478095Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\""
Apr 28 00:12:48.092078 containerd[1486]: time="2026-04-28T00:12:48.092049536Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 28 00:12:48.587948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803394895.mount: Deactivated successfully.
Apr 28 00:12:49.548475 containerd[1486]: time="2026-04-28T00:12:49.546931142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:49.551102 containerd[1486]: time="2026-04-28T00:12:49.551053656Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309"
Apr 28 00:12:49.552644 containerd[1486]: time="2026-04-28T00:12:49.552524697Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:49.556685 containerd[1486]: time="2026-04-28T00:12:49.556428349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:49.558067 containerd[1486]: time="2026-04-28T00:12:49.557793382Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.465670906s"
Apr 28 00:12:49.558067 containerd[1486]: time="2026-04-28T00:12:49.557848983Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Apr 28 00:12:49.558396 containerd[1486]: time="2026-04-28T00:12:49.558373995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 28 00:12:50.014425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317840855.mount: Deactivated successfully.
Apr 28 00:12:50.028157 containerd[1486]: time="2026-04-28T00:12:50.028052637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:50.029901 containerd[1486]: time="2026-04-28T00:12:50.029851757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Apr 28 00:12:50.031033 containerd[1486]: time="2026-04-28T00:12:50.030982151Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:50.035754 containerd[1486]: time="2026-04-28T00:12:50.034854753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:50.036240 containerd[1486]: time="2026-04-28T00:12:50.036147229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 477.683578ms"
Apr 28 00:12:50.036240 containerd[1486]: time="2026-04-28T00:12:50.036186452Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 28 00:12:50.037467 containerd[1486]: time="2026-04-28T00:12:50.037159792Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 28 00:12:50.383783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 28 00:12:50.390223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
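The restart cadence of kubelet.service can be checked directly from the journal timestamps: the previous attempt failed at 00:12:40.348433 and the next restart job was scheduled at 00:12:50.383783, a gap of about 10 s, consistent with a `RestartSec=` of roughly ten seconds in the unit (an inference from the timestamps; the log does not print the unit file). A sketch of that arithmetic:

```python
from datetime import datetime


def journal_delta_seconds(t1: str, t2: str) -> float:
    """Seconds between two journal HH:MM:SS.ffffff timestamps on the same day."""
    fmt = "%H:%M:%S.%f"
    return (datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)).total_seconds()


# kubelet.service failure -> next scheduled restart, timestamps from the log above
gap = journal_delta_seconds("00:12:40.348433", "00:12:50.383783")
```

The same helper works for any pair of same-day journal timestamps, e.g. measuring how long a unit took between "Starting" and "Started".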
Apr 28 00:12:50.532282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:12:50.543330 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 28 00:12:50.552208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475382259.mount: Deactivated successfully.
Apr 28 00:12:50.593124 kubelet[2014]: E0428 00:12:50.593072 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 28 00:12:50.596206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 00:12:50.596345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 28 00:12:51.288150 containerd[1486]: time="2026-04-28T00:12:51.288056503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:51.290173 containerd[1486]: time="2026-04-28T00:12:51.290116249Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21752394"
Apr 28 00:12:51.291452 containerd[1486]: time="2026-04-28T00:12:51.291368168Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:51.295340 containerd[1486]: time="2026-04-28T00:12:51.295281592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 28 00:12:51.297064 containerd[1486]: time="2026-04-28T00:12:51.297009564Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.259808663s"
Apr 28 00:12:51.297064 containerd[1486]: time="2026-04-28T00:12:51.297048759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Apr 28 00:12:53.967717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:12:53.978508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:12:54.014736 systemd[1]: Reloading requested from client PID 2102 ('systemctl') (unit session-7.scope)...
Apr 28 00:12:54.014756 systemd[1]: Reloading...
Apr 28 00:12:54.119811 zram_generator::config[2142]: No configuration found.
Apr 28 00:12:54.225569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 00:12:54.296007 systemd[1]: Reloading finished in 280 ms.
Apr 28 00:12:54.356496 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 28 00:12:54.356644 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 28 00:12:54.357051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:12:54.364690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:12:54.510950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:12:54.511084 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 00:12:54.552475 kubelet[2190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 00:12:55.075953 kubelet[2190]: I0428 00:12:55.075887 2190 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 28 00:12:55.075953 kubelet[2190]: I0428 00:12:55.075945 2190 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 00:12:55.075953 kubelet[2190]: I0428 00:12:55.075963 2190 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 28 00:12:55.076150 kubelet[2190]: I0428 00:12:55.075970 2190 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 00:12:55.076351 kubelet[2190]: I0428 00:12:55.076336 2190 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 28 00:12:55.086278 kubelet[2190]: E0428 00:12:55.086116 2190 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.105.25.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.105.25.61:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 28 00:12:55.089466 kubelet[2190]: I0428 00:12:55.089413 2190 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 00:12:55.095732 kubelet[2190]: E0428 00:12:55.094036 2190 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 00:12:55.095732 kubelet[2190]: I0428 00:12:55.094118 2190 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 28 00:12:55.097000 kubelet[2190]: I0428 00:12:55.096976 2190 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 28 00:12:55.098423 kubelet[2190]: I0428 00:12:55.098385 2190 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 00:12:55.098738 kubelet[2190]: I0428 00:12:55.098544 2190 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-n-651e172f95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 28 00:12:55.098875 kubelet[2190]: I0428 00:12:55.098861 2190 topology_manager.go:143] "Creating topology manager with none policy"
Apr 28 00:12:55.098985 kubelet[2190]: I0428 00:12:55.098974 2190 container_manager_linux.go:308] "Creating device plugin manager"
Apr 28 00:12:55.099142 kubelet[2190]: I0428 00:12:55.099129 2190 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 28 00:12:55.101418 kubelet[2190]: I0428 00:12:55.101390 2190 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 28 00:12:55.101897 kubelet[2190]: I0428 00:12:55.101879 2190 kubelet.go:482] "Attempting to sync node with API server"
Apr 28 00:12:55.102070 kubelet[2190]: I0428 00:12:55.102054 2190 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 00:12:55.102167 kubelet[2190]: I0428 00:12:55.102155 2190 kubelet.go:394] "Adding apiserver pod source"
Apr 28 00:12:55.102246 kubelet[2190]: I0428 00:12:55.102234 2190 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 00:12:55.106566 kubelet[2190]: I0428 00:12:55.106547 2190 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 00:12:55.107837 kubelet[2190]: I0428 00:12:55.107813 2190 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 00:12:55.107942 kubelet[2190]: I0428 00:12:55.107920 2190 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 28 00:12:55.108062 kubelet[2190]: W0428 00:12:55.108050 2190 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 28 00:12:55.111802 kubelet[2190]: I0428 00:12:55.111601 2190 server.go:1257] "Started kubelet"
Apr 28 00:12:55.113349 kubelet[2190]: I0428 00:12:55.113323 2190 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 28 00:12:55.119521 kubelet[2190]: E0428 00:12:55.117992 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.25.61:6443/api/v1/namespaces/default/events\": dial tcp 178.105.25.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-n-651e172f95.18aa5ceccf54f080 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-n-651e172f95,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-n-651e172f95,},FirstTimestamp:2026-04-28 00:12:55.111569536 +0000 UTC m=+0.594277083,LastTimestamp:2026-04-28 00:12:55.111569536 +0000 UTC m=+0.594277083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-n-651e172f95,}"
Apr 28 00:12:55.120507 kubelet[2190]: I0428 00:12:55.119828 2190 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 00:12:55.121072 kubelet[2190]: I0428 00:12:55.120959 2190 server.go:317] "Adding debug handlers to kubelet server"
Apr 28 00:12:55.123926 kubelet[2190]: I0428 00:12:55.123903 2190 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 28 00:12:55.124597 kubelet[2190]: E0428 00:12:55.124277 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:55.124597 kubelet[2190]: I0428 00:12:55.124552 2190 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 00:12:55.124718 kubelet[2190]: I0428 00:12:55.124613 2190 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 28 00:12:55.124845 kubelet[2190]: I0428 00:12:55.124821 2190 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 00:12:55.125716 kubelet[2190]: I0428 00:12:55.125127 2190 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 00:12:55.125716 kubelet[2190]: I0428 00:12:55.125448 2190 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 28 00:12:55.125716 kubelet[2190]: I0428 00:12:55.125499 2190 reconciler.go:29] "Reconciler: start to sync state"
Apr 28 00:12:55.128281 kubelet[2190]: E0428 00:12:55.128239 2190 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-651e172f95?timeout=10s\": dial tcp 178.105.25.61:6443: connect: connection refused" interval="200ms"
Apr 28 00:12:55.130377 kubelet[2190]: I0428 00:12:55.130345 2190 factory.go:223] Registration of the systemd container factory successfully
Apr 28 00:12:55.131062 kubelet[2190]: I0428 00:12:55.131033 2190 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 00:12:55.137806 kubelet[2190]: E0428 00:12:55.137773 2190 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 28 00:12:55.138391 kubelet[2190]: I0428 00:12:55.138361 2190 factory.go:223] Registration of the containerd container factory successfully
Apr 28 00:12:55.150786 kubelet[2190]: I0428 00:12:55.150746 2190 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 28 00:12:55.153024 kubelet[2190]: I0428 00:12:55.152993 2190 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 28 00:12:55.153198 kubelet[2190]: I0428 00:12:55.153181 2190 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 28 00:12:55.153271 kubelet[2190]: I0428 00:12:55.153262 2190 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 28 00:12:55.153396 kubelet[2190]: E0428 00:12:55.153374 2190 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 28 00:12:55.158616 kubelet[2190]: I0428 00:12:55.158590 2190 cpu_manager.go:225] "Starting" policy="none"
Apr 28 00:12:55.159260 kubelet[2190]: I0428 00:12:55.158996 2190 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 28 00:12:55.159260 kubelet[2190]: I0428 00:12:55.159023 2190 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 28 00:12:55.162255 kubelet[2190]: I0428 00:12:55.162232 2190 policy_none.go:50] "Start"
Apr 28 00:12:55.162371 kubelet[2190]: I0428 00:12:55.162358 2190 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 28 00:12:55.162437 kubelet[2190]: I0428 00:12:55.162426 2190 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 28 00:12:55.164423 kubelet[2190]: I0428 00:12:55.163823 2190 policy_none.go:44] "Start"
Apr 28 00:12:55.169092 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 28 00:12:55.185502 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 28 00:12:55.190273 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 28 00:12:55.205770 kubelet[2190]: E0428 00:12:55.205424 2190 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:12:55.205770 kubelet[2190]: I0428 00:12:55.205768 2190 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 28 00:12:55.205959 kubelet[2190]: I0428 00:12:55.205787 2190 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:12:55.206409 kubelet[2190]: I0428 00:12:55.206313 2190 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 28 00:12:55.208944 kubelet[2190]: E0428 00:12:55.208878 2190 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:12:55.208944 kubelet[2190]: E0428 00:12:55.208918 2190 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-n-651e172f95\" not found" Apr 28 00:12:55.267355 systemd[1]: Created slice kubepods-burstable-pod186ed3205f45f1d8fa54b55ea0789eaa.slice - libcontainer container kubepods-burstable-pod186ed3205f45f1d8fa54b55ea0789eaa.slice. Apr 28 00:12:55.284437 kubelet[2190]: E0428 00:12:55.284384 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95" Apr 28 00:12:55.290186 systemd[1]: Created slice kubepods-burstable-pod7eafc7c9d1f5744dd6622a24cbea8850.slice - libcontainer container kubepods-burstable-pod7eafc7c9d1f5744dd6622a24cbea8850.slice. 
Apr 28 00:12:55.304746 kubelet[2190]: E0428 00:12:55.303313 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.307476 systemd[1]: Created slice kubepods-burstable-pod519228a872f03a19877b5dca82f1770e.slice - libcontainer container kubepods-burstable-pod519228a872f03a19877b5dca82f1770e.slice.
Apr 28 00:12:55.309578 kubelet[2190]: I0428 00:12:55.309548 2190 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.310099 kubelet[2190]: E0428 00:12:55.310074 2190 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.105.25.61:6443/api/v1/nodes\": dial tcp 178.105.25.61:6443: connect: connection refused" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.311362 kubelet[2190]: E0428 00:12:55.311344 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.330657 kubelet[2190]: E0428 00:12:55.330449 2190 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-651e172f95?timeout=10s\": dial tcp 178.105.25.61:6443: connect: connection refused" interval="400ms"
Apr 28 00:12:55.427580 kubelet[2190]: I0428 00:12:55.427006 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/519228a872f03a19877b5dca82f1770e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-n-651e172f95\" (UID: \"519228a872f03a19877b5dca82f1770e\") " pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.427580 kubelet[2190]: I0428 00:12:55.427075 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/186ed3205f45f1d8fa54b55ea0789eaa-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-n-651e172f95\" (UID: \"186ed3205f45f1d8fa54b55ea0789eaa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.427580 kubelet[2190]: I0428 00:12:55.427113 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/186ed3205f45f1d8fa54b55ea0789eaa-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-n-651e172f95\" (UID: \"186ed3205f45f1d8fa54b55ea0789eaa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.427580 kubelet[2190]: I0428 00:12:55.427150 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.427580 kubelet[2190]: I0428 00:12:55.427207 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.428246 kubelet[2190]: I0428 00:12:55.427250 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/186ed3205f45f1d8fa54b55ea0789eaa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-n-651e172f95\" (UID: \"186ed3205f45f1d8fa54b55ea0789eaa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.428246 kubelet[2190]: I0428 00:12:55.427283 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.428246 kubelet[2190]: I0428 00:12:55.427315 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.428246 kubelet[2190]: I0428 00:12:55.427359 2190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.513347 kubelet[2190]: I0428 00:12:55.513300 2190 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.513864 kubelet[2190]: E0428 00:12:55.513826 2190 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.105.25.61:6443/api/v1/nodes\": dial tcp 178.105.25.61:6443: connect: connection refused" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.588578 containerd[1486]: time="2026-04-28T00:12:55.588169247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-n-651e172f95,Uid:186ed3205f45f1d8fa54b55ea0789eaa,Namespace:kube-system,Attempt:0,}"
Apr 28 00:12:55.607150 containerd[1486]: time="2026-04-28T00:12:55.607090920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-n-651e172f95,Uid:7eafc7c9d1f5744dd6622a24cbea8850,Namespace:kube-system,Attempt:0,}"
Apr 28 00:12:55.617666 containerd[1486]: time="2026-04-28T00:12:55.617448201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-n-651e172f95,Uid:519228a872f03a19877b5dca82f1770e,Namespace:kube-system,Attempt:0,}"
Apr 28 00:12:55.731684 kubelet[2190]: E0428 00:12:55.731616 2190 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-651e172f95?timeout=10s\": dial tcp 178.105.25.61:6443: connect: connection refused" interval="800ms"
Apr 28 00:12:55.916599 kubelet[2190]: I0428 00:12:55.916496 2190 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:55.917117 kubelet[2190]: E0428 00:12:55.917039 2190 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.105.25.61:6443/api/v1/nodes\": dial tcp 178.105.25.61:6443: connect: connection refused" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:56.029937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850847459.mount: Deactivated successfully.
Apr 28 00:12:56.036523 containerd[1486]: time="2026-04-28T00:12:56.036463434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 00:12:56.040009 containerd[1486]: time="2026-04-28T00:12:56.039943792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Apr 28 00:12:56.040797 containerd[1486]: time="2026-04-28T00:12:56.040745486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 00:12:56.042689 containerd[1486]: time="2026-04-28T00:12:56.042586925Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 00:12:56.044208 containerd[1486]: time="2026-04-28T00:12:56.044158071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 28 00:12:56.045231 containerd[1486]: time="2026-04-28T00:12:56.045165773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 00:12:56.046786 containerd[1486]: time="2026-04-28T00:12:56.046361838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 28 00:12:56.050580 containerd[1486]: time="2026-04-28T00:12:56.050528314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 28 00:12:56.053185 containerd[1486]: time="2026-04-28T00:12:56.053132668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 445.912013ms"
Apr 28 00:12:56.054182 containerd[1486]: time="2026-04-28T00:12:56.054146947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.890809ms"
Apr 28 00:12:56.055567 containerd[1486]: time="2026-04-28T00:12:56.055537911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 437.99985ms"
Apr 28 00:12:56.184909 containerd[1486]: time="2026-04-28T00:12:56.184266134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:12:56.184909 containerd[1486]: time="2026-04-28T00:12:56.184320714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:12:56.184909 containerd[1486]: time="2026-04-28T00:12:56.184343412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:12:56.187877 containerd[1486]: time="2026-04-28T00:12:56.186023477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:12:56.187877 containerd[1486]: time="2026-04-28T00:12:56.186081386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:12:56.187877 containerd[1486]: time="2026-04-28T00:12:56.186096905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:12:56.187877 containerd[1486]: time="2026-04-28T00:12:56.186196521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:12:56.187877 containerd[1486]: time="2026-04-28T00:12:56.186227681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:12:56.194959 containerd[1486]: time="2026-04-28T00:12:56.194885105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:12:56.195299 containerd[1486]: time="2026-04-28T00:12:56.195142244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:12:56.195299 containerd[1486]: time="2026-04-28T00:12:56.195194137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:12:56.195505 containerd[1486]: time="2026-04-28T00:12:56.195457772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:12:56.219846 systemd[1]: Started cri-containerd-4c0f2735bab5e64f7a7e81653533ef3e11b2aef07ef6398a63d436967d7e52a5.scope - libcontainer container 4c0f2735bab5e64f7a7e81653533ef3e11b2aef07ef6398a63d436967d7e52a5.
Apr 28 00:12:56.224893 systemd[1]: Started cri-containerd-230536ad5af556cdba89c7d4020499417b5bc67f06bcec9e78ea6ab45bb5b60b.scope - libcontainer container 230536ad5af556cdba89c7d4020499417b5bc67f06bcec9e78ea6ab45bb5b60b.
Apr 28 00:12:56.226494 systemd[1]: Started cri-containerd-882b6dfb0124e3d32d0caed1d0b2d822fc612af502911ee89ab8cff8c7498cba.scope - libcontainer container 882b6dfb0124e3d32d0caed1d0b2d822fc612af502911ee89ab8cff8c7498cba.
Apr 28 00:12:56.293409 containerd[1486]: time="2026-04-28T00:12:56.292970165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-n-651e172f95,Uid:186ed3205f45f1d8fa54b55ea0789eaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c0f2735bab5e64f7a7e81653533ef3e11b2aef07ef6398a63d436967d7e52a5\""
Apr 28 00:12:56.308374 containerd[1486]: time="2026-04-28T00:12:56.308302975Z" level=info msg="CreateContainer within sandbox \"4c0f2735bab5e64f7a7e81653533ef3e11b2aef07ef6398a63d436967d7e52a5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 28 00:12:56.308807 containerd[1486]: time="2026-04-28T00:12:56.308776108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-n-651e172f95,Uid:519228a872f03a19877b5dca82f1770e,Namespace:kube-system,Attempt:0,} returns sandbox id \"882b6dfb0124e3d32d0caed1d0b2d822fc612af502911ee89ab8cff8c7498cba\""
Apr 28 00:12:56.310418 containerd[1486]: time="2026-04-28T00:12:56.309912379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-n-651e172f95,Uid:7eafc7c9d1f5744dd6622a24cbea8850,Namespace:kube-system,Attempt:0,} returns sandbox id \"230536ad5af556cdba89c7d4020499417b5bc67f06bcec9e78ea6ab45bb5b60b\""
Apr 28 00:12:56.312073 kubelet[2190]: E0428 00:12:56.311964 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.25.61:6443/api/v1/namespaces/default/events\": dial tcp 178.105.25.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-n-651e172f95.18aa5ceccf54f080 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-n-651e172f95,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-n-651e172f95,},FirstTimestamp:2026-04-28 00:12:55.111569536 +0000 UTC m=+0.594277083,LastTimestamp:2026-04-28 00:12:55.111569536 +0000 UTC m=+0.594277083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-n-651e172f95,}"
Apr 28 00:12:56.328853 containerd[1486]: time="2026-04-28T00:12:56.328808480Z" level=info msg="CreateContainer within sandbox \"4c0f2735bab5e64f7a7e81653533ef3e11b2aef07ef6398a63d436967d7e52a5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0976f3b23b1aab48b4ede52f71418b949c4ef4828e02683d6a67a43b51707a6\""
Apr 28 00:12:56.329925 containerd[1486]: time="2026-04-28T00:12:56.329894142Z" level=info msg="StartContainer for \"a0976f3b23b1aab48b4ede52f71418b949c4ef4828e02683d6a67a43b51707a6\""
Apr 28 00:12:56.335717 containerd[1486]: time="2026-04-28T00:12:56.335469830Z" level=info msg="CreateContainer within sandbox \"882b6dfb0124e3d32d0caed1d0b2d822fc612af502911ee89ab8cff8c7498cba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 28 00:12:56.336124 containerd[1486]: time="2026-04-28T00:12:56.335936546Z" level=info msg="CreateContainer within sandbox \"230536ad5af556cdba89c7d4020499417b5bc67f06bcec9e78ea6ab45bb5b60b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 28 00:12:56.363714 containerd[1486]: time="2026-04-28T00:12:56.363614670Z" level=info msg="CreateContainer within sandbox \"230536ad5af556cdba89c7d4020499417b5bc67f06bcec9e78ea6ab45bb5b60b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca\""
Apr 28 00:12:56.364439 containerd[1486]: time="2026-04-28T00:12:56.364411672Z" level=info msg="CreateContainer within sandbox \"882b6dfb0124e3d32d0caed1d0b2d822fc612af502911ee89ab8cff8c7498cba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9\""
Apr 28 00:12:56.365588 containerd[1486]: time="2026-04-28T00:12:56.365304480Z" level=info msg="StartContainer for \"47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9\""
Apr 28 00:12:56.365865 containerd[1486]: time="2026-04-28T00:12:56.365780500Z" level=info msg="StartContainer for \"9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca\""
Apr 28 00:12:56.370787 systemd[1]: Started cri-containerd-a0976f3b23b1aab48b4ede52f71418b949c4ef4828e02683d6a67a43b51707a6.scope - libcontainer container a0976f3b23b1aab48b4ede52f71418b949c4ef4828e02683d6a67a43b51707a6.
Apr 28 00:12:56.411848 systemd[1]: Started cri-containerd-9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca.scope - libcontainer container 9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca.
Apr 28 00:12:56.420854 systemd[1]: Started cri-containerd-47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9.scope - libcontainer container 47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9.
Apr 28 00:12:56.469840 containerd[1486]: time="2026-04-28T00:12:56.469054577Z" level=info msg="StartContainer for \"a0976f3b23b1aab48b4ede52f71418b949c4ef4828e02683d6a67a43b51707a6\" returns successfully"
Apr 28 00:12:56.531013 containerd[1486]: time="2026-04-28T00:12:56.530956198Z" level=info msg="StartContainer for \"47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9\" returns successfully"
Apr 28 00:12:56.531576 containerd[1486]: time="2026-04-28T00:12:56.530979177Z" level=info msg="StartContainer for \"9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca\" returns successfully"
Apr 28 00:12:56.532754 kubelet[2190]: E0428 00:12:56.532713 2190 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-651e172f95?timeout=10s\": dial tcp 178.105.25.61:6443: connect: connection refused" interval="1.6s"
Apr 28 00:12:56.720762 kubelet[2190]: I0428 00:12:56.719723 2190 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:57.169718 kubelet[2190]: E0428 00:12:57.169626 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:57.174547 kubelet[2190]: E0428 00:12:57.174315 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:57.177673 kubelet[2190]: E0428 00:12:57.176309 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:58.072255 kubelet[2190]: I0428 00:12:58.072205 2190 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:58.072255 kubelet[2190]: E0428 00:12:58.072257 2190 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-n-651e172f95\": node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.092269 kubelet[2190]: E0428 00:12:58.092234 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.179740 kubelet[2190]: E0428 00:12:58.179689 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:58.180641 kubelet[2190]: E0428 00:12:58.180253 2190 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-n-651e172f95\" not found" node="ci-4081-3-7-n-651e172f95"
Apr 28 00:12:58.192505 kubelet[2190]: E0428 00:12:58.192435 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.294116 kubelet[2190]: E0428 00:12:58.292765 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.393582 kubelet[2190]: E0428 00:12:58.393518 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.494767 kubelet[2190]: E0428 00:12:58.494625 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.595294 kubelet[2190]: E0428 00:12:58.595244 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.696324 kubelet[2190]: E0428 00:12:58.696151 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.796612 kubelet[2190]: E0428 00:12:58.796523 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.896806 kubelet[2190]: E0428 00:12:58.896646 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:58.996869 kubelet[2190]: E0428 00:12:58.996752 2190 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:12:59.109514 kubelet[2190]: I0428 00:12:59.109239 2190 apiserver.go:52] "Watching apiserver"
Apr 28 00:12:59.126835 kubelet[2190]: I0428 00:12:59.126406 2190 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:59.126835 kubelet[2190]: I0428 00:12:59.126433 2190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 28 00:12:59.142231 kubelet[2190]: I0428 00:12:59.141885 2190 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95"
Apr 28 00:12:59.149686 kubelet[2190]: I0428 00:12:59.149055 2190 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95"
Apr 28 00:13:00.477015 systemd[1]: Reloading requested from client PID 2477 ('systemctl') (unit session-7.scope)...
Apr 28 00:13:00.477328 systemd[1]: Reloading...
Apr 28 00:13:00.569721 zram_generator::config[2520]: No configuration found.
Apr 28 00:13:00.652303 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 00:13:00.754217 systemd[1]: Reloading finished in 276 ms.
Apr 28 00:13:00.789871 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:13:00.810893 systemd[1]: kubelet.service: Deactivated successfully.
Apr 28 00:13:00.812727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:13:00.820155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 28 00:13:00.955870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 28 00:13:00.964046 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 28 00:13:01.025803 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 28 00:13:01.035894 kubelet[2562]: I0428 00:13:01.035817 2562 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 28 00:13:01.035894 kubelet[2562]: I0428 00:13:01.035869 2562 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 28 00:13:01.035894 kubelet[2562]: I0428 00:13:01.035892 2562 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 28 00:13:01.035894 kubelet[2562]: I0428 00:13:01.035898 2562 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 28 00:13:01.036191 kubelet[2562]: I0428 00:13:01.036181 2562 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 28 00:13:01.037700 kubelet[2562]: I0428 00:13:01.037653 2562 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 28 00:13:01.040361 kubelet[2562]: I0428 00:13:01.039907 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 28 00:13:01.043529 kubelet[2562]: E0428 00:13:01.043496 2562 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 28 00:13:01.043811 kubelet[2562]: I0428 00:13:01.043768 2562 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 28 00:13:01.047942 kubelet[2562]: I0428 00:13:01.047901 2562 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 28 00:13:01.048171 kubelet[2562]: I0428 00:13:01.048137 2562 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 28 00:13:01.048372 kubelet[2562]: I0428 00:13:01.048173 2562 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-n-651e172f95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 28 00:13:01.048372 kubelet[2562]: I0428 00:13:01.048347 2562 topology_manager.go:143] "Creating topology manager with none policy"
Apr 28 00:13:01.048372 kubelet[2562]: I0428 00:13:01.048355 2562 container_manager_linux.go:308] "Creating device plugin manager"
Apr 28 00:13:01.048634 kubelet[2562]: I0428 00:13:01.048385 2562 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 28 00:13:01.048634 kubelet[2562]: I0428 00:13:01.048627 2562 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 28 00:13:01.049007 kubelet[2562]: I0428 00:13:01.048882 2562 kubelet.go:482] "Attempting to sync node with API server"
Apr 28 00:13:01.049007 kubelet[2562]: I0428 00:13:01.048897 2562 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 28 00:13:01.049007 kubelet[2562]: I0428 00:13:01.048914 2562 kubelet.go:394] "Adding apiserver pod source"
Apr 28 00:13:01.049007 kubelet[2562]: I0428 00:13:01.048924 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 28 00:13:01.059580 kubelet[2562]: I0428 00:13:01.058627 2562 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 28 00:13:01.059813 kubelet[2562]: I0428 00:13:01.059587 2562 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 28 00:13:01.059813 kubelet[2562]: I0428 00:13:01.059617 2562 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 28 00:13:01.065367 kubelet[2562]: I0428 00:13:01.064515 2562 server.go:1257] "Started kubelet"
Apr 28 00:13:01.076942 kubelet[2562]: I0428 00:13:01.066144 2562 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 28 00:13:01.077066 kubelet[2562]: I0428 00:13:01.076978 2562 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 28 00:13:01.077681 kubelet[2562]: I0428 00:13:01.077195 2562 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 28 00:13:01.077681 kubelet[2562]: I0428 00:13:01.068648 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 28 00:13:01.080106 kubelet[2562]: I0428 00:13:01.068478 2562 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 28 00:13:01.093835 kubelet[2562]: I0428 00:13:01.068496 2562 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 28 00:13:01.095011 kubelet[2562]: I0428 00:13:01.094990 2562 server.go:317] "Adding debug handlers to kubelet server"
Apr 28 00:13:01.099010 kubelet[2562]: I0428 00:13:01.098985 2562 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 28 00:13:01.099225 kubelet[2562]: E0428 00:13:01.099194 2562 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-n-651e172f95\" not found"
Apr 28 00:13:01.099404 kubelet[2562]: I0428 00:13:01.099388 2562 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 28 00:13:01.099516 kubelet[2562]: I0428 00:13:01.099503 2562 reconciler.go:29] "Reconciler: start to sync state"
Apr 28 00:13:01.106932 kubelet[2562]: I0428 00:13:01.106897 2562 factory.go:223] Registration of the containerd container factory successfully
Apr 28 00:13:01.106932 kubelet[2562]: I0428 00:13:01.106942 2562 factory.go:223] Registration of the systemd container factory successfully
Apr 28 00:13:01.107069 kubelet[2562]: I0428 00:13:01.107027 2562 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 28 00:13:01.108035 kubelet[2562]: I0428 00:13:01.107962 2562 kubelet_network_linux.go:54] "Initialized iptables rules."
protocol="IPv4" Apr 28 00:13:01.109018 kubelet[2562]: I0428 00:13:01.108991 2562 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 00:13:01.109018 kubelet[2562]: I0428 00:13:01.109015 2562 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 28 00:13:01.109108 kubelet[2562]: I0428 00:13:01.109037 2562 kubelet.go:2501] "Starting kubelet main sync loop" Apr 28 00:13:01.109108 kubelet[2562]: E0428 00:13:01.109070 2562 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:13:01.162614 kubelet[2562]: I0428 00:13:01.162587 2562 cpu_manager.go:225] "Starting" policy="none" Apr 28 00:13:01.162614 kubelet[2562]: I0428 00:13:01.162606 2562 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 28 00:13:01.162815 kubelet[2562]: I0428 00:13:01.162629 2562 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 28 00:13:01.163778 kubelet[2562]: I0428 00:13:01.163082 2562 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 28 00:13:01.163778 kubelet[2562]: I0428 00:13:01.163104 2562 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 28 00:13:01.163778 kubelet[2562]: I0428 00:13:01.163122 2562 policy_none.go:50] "Start" Apr 28 00:13:01.163778 kubelet[2562]: I0428 00:13:01.163130 2562 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:13:01.163778 kubelet[2562]: I0428 00:13:01.163138 2562 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:13:01.163778 kubelet[2562]: I0428 00:13:01.163231 2562 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 28 00:13:01.163778 kubelet[2562]: I0428 00:13:01.163243 2562 
policy_none.go:44] "Start" Apr 28 00:13:01.169800 kubelet[2562]: E0428 00:13:01.169752 2562 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:13:01.170025 kubelet[2562]: I0428 00:13:01.169980 2562 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 28 00:13:01.170063 kubelet[2562]: I0428 00:13:01.170003 2562 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:13:01.172104 kubelet[2562]: I0428 00:13:01.172049 2562 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 28 00:13:01.173110 kubelet[2562]: E0428 00:13:01.173077 2562 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:13:01.210577 kubelet[2562]: I0428 00:13:01.210446 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.210577 kubelet[2562]: I0428 00:13:01.210470 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.211413 kubelet[2562]: I0428 00:13:01.211385 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.220441 kubelet[2562]: E0428 00:13:01.220300 2562 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-n-651e172f95\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.221784 kubelet[2562]: E0428 00:13:01.221708 2562 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-n-651e172f95\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.221784 kubelet[2562]: E0428 00:13:01.221744 2562 kubelet.go:3342] "Failed creating a mirror 
pod" err="pods \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.273378 kubelet[2562]: I0428 00:13:01.273349 2562 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.295218 kubelet[2562]: I0428 00:13:01.294808 2562 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.295218 kubelet[2562]: I0428 00:13:01.295019 2562 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302146 kubelet[2562]: I0428 00:13:01.301811 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302146 kubelet[2562]: I0428 00:13:01.301857 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302146 kubelet[2562]: I0428 00:13:01.301900 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302146 kubelet[2562]: I0428 00:13:01.301921 2562 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: \"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302146 kubelet[2562]: I0428 00:13:01.301939 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/519228a872f03a19877b5dca82f1770e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-n-651e172f95\" (UID: \"519228a872f03a19877b5dca82f1770e\") " pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302461 kubelet[2562]: I0428 00:13:01.301958 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/186ed3205f45f1d8fa54b55ea0789eaa-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-n-651e172f95\" (UID: \"186ed3205f45f1d8fa54b55ea0789eaa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302461 kubelet[2562]: I0428 00:13:01.301977 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/186ed3205f45f1d8fa54b55ea0789eaa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-n-651e172f95\" (UID: \"186ed3205f45f1d8fa54b55ea0789eaa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302461 kubelet[2562]: I0428 00:13:01.302040 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7eafc7c9d1f5744dd6622a24cbea8850-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" (UID: 
\"7eafc7c9d1f5744dd6622a24cbea8850\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:01.302461 kubelet[2562]: I0428 00:13:01.302059 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/186ed3205f45f1d8fa54b55ea0789eaa-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-n-651e172f95\" (UID: \"186ed3205f45f1d8fa54b55ea0789eaa\") " pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" Apr 28 00:13:02.058755 kubelet[2562]: I0428 00:13:02.058653 2562 apiserver.go:52] "Watching apiserver" Apr 28 00:13:02.100164 kubelet[2562]: I0428 00:13:02.100099 2562 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:13:02.142715 kubelet[2562]: I0428 00:13:02.138548 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95" Apr 28 00:13:02.142715 kubelet[2562]: I0428 00:13:02.138923 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" Apr 28 00:13:02.142715 kubelet[2562]: I0428 00:13:02.139138 2562 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:02.157823 kubelet[2562]: E0428 00:13:02.157781 2562 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-n-651e172f95\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" Apr 28 00:13:02.159465 kubelet[2562]: E0428 00:13:02.158493 2562 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-n-651e172f95\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95" Apr 28 00:13:02.161916 kubelet[2562]: E0428 00:13:02.161648 2562 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-n-651e172f95\" already exists" 
pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" Apr 28 00:13:02.185679 kubelet[2562]: I0428 00:13:02.184828 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-n-651e172f95" podStartSLOduration=3.184785792 podStartE2EDuration="3.184785792s" podCreationTimestamp="2026-04-28 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:13:02.184764527 +0000 UTC m=+1.214585778" watchObservedRunningTime="2026-04-28 00:13:02.184785792 +0000 UTC m=+1.214607043" Apr 28 00:13:02.197567 kubelet[2562]: I0428 00:13:02.197487 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-n-651e172f95" podStartSLOduration=3.197469198 podStartE2EDuration="3.197469198s" podCreationTimestamp="2026-04-28 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:13:02.19743612 +0000 UTC m=+1.227257371" watchObservedRunningTime="2026-04-28 00:13:02.197469198 +0000 UTC m=+1.227290490" Apr 28 00:13:02.210106 kubelet[2562]: I0428 00:13:02.210044 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-n-651e172f95" podStartSLOduration=3.2100143660000002 podStartE2EDuration="3.210014366s" podCreationTimestamp="2026-04-28 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:13:02.209628522 +0000 UTC m=+1.239449773" watchObservedRunningTime="2026-04-28 00:13:02.210014366 +0000 UTC m=+1.239835577" Apr 28 00:13:02.854490 update_engine[1472]: I20260428 00:13:02.854374 1472 update_attempter.cc:509] Updating boot flags... 
Apr 28 00:13:02.904770 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2618) Apr 28 00:13:02.969510 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2621) Apr 28 00:13:05.696389 kubelet[2562]: I0428 00:13:05.696010 2562 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 00:13:05.698062 containerd[1486]: time="2026-04-28T00:13:05.697704207Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 28 00:13:05.699209 kubelet[2562]: I0428 00:13:05.698043 2562 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 00:13:06.525308 systemd[1]: Created slice kubepods-besteffort-pod949bdf1e_387a_4ae3_9d80_045b3fbab412.slice - libcontainer container kubepods-besteffort-pod949bdf1e_387a_4ae3_9d80_045b3fbab412.slice. Apr 28 00:13:06.535609 kubelet[2562]: I0428 00:13:06.534267 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/949bdf1e-387a-4ae3-9d80-045b3fbab412-xtables-lock\") pod \"kube-proxy-tgbzq\" (UID: \"949bdf1e-387a-4ae3-9d80-045b3fbab412\") " pod="kube-system/kube-proxy-tgbzq" Apr 28 00:13:06.535609 kubelet[2562]: I0428 00:13:06.534311 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/949bdf1e-387a-4ae3-9d80-045b3fbab412-kube-proxy\") pod \"kube-proxy-tgbzq\" (UID: \"949bdf1e-387a-4ae3-9d80-045b3fbab412\") " pod="kube-system/kube-proxy-tgbzq" Apr 28 00:13:06.535609 kubelet[2562]: I0428 00:13:06.534332 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/949bdf1e-387a-4ae3-9d80-045b3fbab412-lib-modules\") pod 
\"kube-proxy-tgbzq\" (UID: \"949bdf1e-387a-4ae3-9d80-045b3fbab412\") " pod="kube-system/kube-proxy-tgbzq" Apr 28 00:13:06.535609 kubelet[2562]: I0428 00:13:06.534348 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4mc\" (UniqueName: \"kubernetes.io/projected/949bdf1e-387a-4ae3-9d80-045b3fbab412-kube-api-access-mk4mc\") pod \"kube-proxy-tgbzq\" (UID: \"949bdf1e-387a-4ae3-9d80-045b3fbab412\") " pod="kube-system/kube-proxy-tgbzq" Apr 28 00:13:06.654372 kubelet[2562]: E0428 00:13:06.654308 2562 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 28 00:13:06.654372 kubelet[2562]: E0428 00:13:06.654354 2562 projected.go:196] Error preparing data for projected volume kube-api-access-mk4mc for pod kube-system/kube-proxy-tgbzq: configmap "kube-root-ca.crt" not found Apr 28 00:13:06.654535 kubelet[2562]: E0428 00:13:06.654470 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/949bdf1e-387a-4ae3-9d80-045b3fbab412-kube-api-access-mk4mc podName:949bdf1e-387a-4ae3-9d80-045b3fbab412 nodeName:}" failed. No retries permitted until 2026-04-28 00:13:07.154425945 +0000 UTC m=+6.184247236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mk4mc" (UniqueName: "kubernetes.io/projected/949bdf1e-387a-4ae3-9d80-045b3fbab412-kube-api-access-mk4mc") pod "kube-proxy-tgbzq" (UID: "949bdf1e-387a-4ae3-9d80-045b3fbab412") : configmap "kube-root-ca.crt" not found Apr 28 00:13:06.945088 systemd[1]: Created slice kubepods-besteffort-poda1e64b4e_773d_46ec_a20b_d17d09c0a19d.slice - libcontainer container kubepods-besteffort-poda1e64b4e_773d_46ec_a20b_d17d09c0a19d.slice. 
Apr 28 00:13:07.037951 kubelet[2562]: I0428 00:13:07.037768 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1e64b4e-773d-46ec-a20b-d17d09c0a19d-var-lib-calico\") pod \"tigera-operator-687949b757-zc7vd\" (UID: \"a1e64b4e-773d-46ec-a20b-d17d09c0a19d\") " pod="tigera-operator/tigera-operator-687949b757-zc7vd" Apr 28 00:13:07.037951 kubelet[2562]: I0428 00:13:07.037839 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8v2s\" (UniqueName: \"kubernetes.io/projected/a1e64b4e-773d-46ec-a20b-d17d09c0a19d-kube-api-access-h8v2s\") pod \"tigera-operator-687949b757-zc7vd\" (UID: \"a1e64b4e-773d-46ec-a20b-d17d09c0a19d\") " pod="tigera-operator/tigera-operator-687949b757-zc7vd" Apr 28 00:13:07.258313 containerd[1486]: time="2026-04-28T00:13:07.257319530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-zc7vd,Uid:a1e64b4e-773d-46ec-a20b-d17d09c0a19d,Namespace:tigera-operator,Attempt:0,}" Apr 28 00:13:07.282304 containerd[1486]: time="2026-04-28T00:13:07.282102257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:13:07.282304 containerd[1486]: time="2026-04-28T00:13:07.282204477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:13:07.282304 containerd[1486]: time="2026-04-28T00:13:07.282229332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:07.282943 containerd[1486]: time="2026-04-28T00:13:07.282860985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:07.305017 systemd[1]: Started cri-containerd-ed13d46abfae1991158fc6c5160344c5c38b496d2573cb19bfb2ca5cf4b9b12e.scope - libcontainer container ed13d46abfae1991158fc6c5160344c5c38b496d2573cb19bfb2ca5cf4b9b12e. Apr 28 00:13:07.346328 containerd[1486]: time="2026-04-28T00:13:07.346079989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-zc7vd,Uid:a1e64b4e-773d-46ec-a20b-d17d09c0a19d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ed13d46abfae1991158fc6c5160344c5c38b496d2573cb19bfb2ca5cf4b9b12e\"" Apr 28 00:13:07.349587 containerd[1486]: time="2026-04-28T00:13:07.349504974Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 28 00:13:07.443504 containerd[1486]: time="2026-04-28T00:13:07.443390022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tgbzq,Uid:949bdf1e-387a-4ae3-9d80-045b3fbab412,Namespace:kube-system,Attempt:0,}" Apr 28 00:13:07.471716 containerd[1486]: time="2026-04-28T00:13:07.471273422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:13:07.471716 containerd[1486]: time="2026-04-28T00:13:07.471345904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:13:07.471716 containerd[1486]: time="2026-04-28T00:13:07.471363475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:07.471716 containerd[1486]: time="2026-04-28T00:13:07.471463454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:07.493119 systemd[1]: Started cri-containerd-7fe47a8a3ffa64f7da730f7be2e7d8f7df945509ffc9c3ed5368b74c2a7412e7.scope - libcontainer container 7fe47a8a3ffa64f7da730f7be2e7d8f7df945509ffc9c3ed5368b74c2a7412e7. Apr 28 00:13:07.522161 containerd[1486]: time="2026-04-28T00:13:07.520517046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tgbzq,Uid:949bdf1e-387a-4ae3-9d80-045b3fbab412,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fe47a8a3ffa64f7da730f7be2e7d8f7df945509ffc9c3ed5368b74c2a7412e7\"" Apr 28 00:13:07.531707 containerd[1486]: time="2026-04-28T00:13:07.531634777Z" level=info msg="CreateContainer within sandbox \"7fe47a8a3ffa64f7da730f7be2e7d8f7df945509ffc9c3ed5368b74c2a7412e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 00:13:07.548392 containerd[1486]: time="2026-04-28T00:13:07.548307871Z" level=info msg="CreateContainer within sandbox \"7fe47a8a3ffa64f7da730f7be2e7d8f7df945509ffc9c3ed5368b74c2a7412e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f50e9cd5b9d80348636557b34bfec1f960b897ccc068717eb682bffdd4ba4163\"" Apr 28 00:13:07.551544 containerd[1486]: time="2026-04-28T00:13:07.549522029Z" level=info msg="StartContainer for \"f50e9cd5b9d80348636557b34bfec1f960b897ccc068717eb682bffdd4ba4163\"" Apr 28 00:13:07.579956 systemd[1]: Started cri-containerd-f50e9cd5b9d80348636557b34bfec1f960b897ccc068717eb682bffdd4ba4163.scope - libcontainer container f50e9cd5b9d80348636557b34bfec1f960b897ccc068717eb682bffdd4ba4163. 
Apr 28 00:13:07.617183 containerd[1486]: time="2026-04-28T00:13:07.617083679Z" level=info msg="StartContainer for \"f50e9cd5b9d80348636557b34bfec1f960b897ccc068717eb682bffdd4ba4163\" returns successfully" Apr 28 00:13:08.177706 kubelet[2562]: I0428 00:13:08.177295 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tgbzq" podStartSLOduration=2.177281208 podStartE2EDuration="2.177281208s" podCreationTimestamp="2026-04-28 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:13:08.176943834 +0000 UTC m=+7.206765125" watchObservedRunningTime="2026-04-28 00:13:08.177281208 +0000 UTC m=+7.207102459" Apr 28 00:13:09.072868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52142392.mount: Deactivated successfully. Apr 28 00:13:12.941156 containerd[1486]: time="2026-04-28T00:13:12.941091674Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:12.942548 containerd[1486]: time="2026-04-28T00:13:12.942503502Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=24868969" Apr 28 00:13:12.943503 containerd[1486]: time="2026-04-28T00:13:12.943227282Z" level=info msg="ImageCreate event name:\"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:12.945905 containerd[1486]: time="2026-04-28T00:13:12.945857319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:12.946982 containerd[1486]: time="2026-04-28T00:13:12.946938847Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id 
\"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"24864964\" in 5.597386889s" Apr 28 00:13:12.946982 containerd[1486]: time="2026-04-28T00:13:12.946979780Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\"" Apr 28 00:13:12.954138 containerd[1486]: time="2026-04-28T00:13:12.953475509Z" level=info msg="CreateContainer within sandbox \"ed13d46abfae1991158fc6c5160344c5c38b496d2573cb19bfb2ca5cf4b9b12e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 28 00:13:12.975961 containerd[1486]: time="2026-04-28T00:13:12.975900669Z" level=info msg="CreateContainer within sandbox \"ed13d46abfae1991158fc6c5160344c5c38b496d2573cb19bfb2ca5cf4b9b12e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f\"" Apr 28 00:13:12.978717 containerd[1486]: time="2026-04-28T00:13:12.977014687Z" level=info msg="StartContainer for \"5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f\"" Apr 28 00:13:13.013880 systemd[1]: Started cri-containerd-5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f.scope - libcontainer container 5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f. 
Apr 28 00:13:13.039295 containerd[1486]: time="2026-04-28T00:13:13.039253017Z" level=info msg="StartContainer for \"5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f\" returns successfully" Apr 28 00:13:16.776155 kubelet[2562]: I0428 00:13:16.775840 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-687949b757-zc7vd" podStartSLOduration=5.17556535 podStartE2EDuration="10.775824942s" podCreationTimestamp="2026-04-28 00:13:06 +0000 UTC" firstStartedPulling="2026-04-28 00:13:07.348596477 +0000 UTC m=+6.378417728" lastFinishedPulling="2026-04-28 00:13:12.948856069 +0000 UTC m=+11.978677320" observedRunningTime="2026-04-28 00:13:13.194890073 +0000 UTC m=+12.224711364" watchObservedRunningTime="2026-04-28 00:13:16.775824942 +0000 UTC m=+15.805646153" Apr 28 00:13:19.256252 sudo[1714]: pam_unix(sudo:session): session closed for user root Apr 28 00:13:19.274861 sshd[1711]: pam_unix(sshd:session): session closed for user core Apr 28 00:13:19.280085 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 00:13:19.280403 systemd[1]: session-7.scope: Consumed 4.902s CPU time, 149.7M memory peak, 0B memory swap peak. Apr 28 00:13:19.284828 systemd[1]: sshd@6-178.105.25.61:22-50.85.169.122:42188.service: Deactivated successfully. Apr 28 00:13:19.284915 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Apr 28 00:13:19.292996 systemd-logind[1471]: Removed session 7. Apr 28 00:13:23.401600 systemd[1]: Created slice kubepods-besteffort-pod9772b8db_092d_4b31_bc9d_729c89bba451.slice - libcontainer container kubepods-besteffort-pod9772b8db_092d_4b31_bc9d_729c89bba451.slice. 
Apr 28 00:13:23.450004 kubelet[2562]: I0428 00:13:23.449714 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfm65\" (UniqueName: \"kubernetes.io/projected/9772b8db-092d-4b31-bc9d-729c89bba451-kube-api-access-sfm65\") pod \"calico-typha-668dcbc8f4-fhhp4\" (UID: \"9772b8db-092d-4b31-bc9d-729c89bba451\") " pod="calico-system/calico-typha-668dcbc8f4-fhhp4"
Apr 28 00:13:23.450004 kubelet[2562]: I0428 00:13:23.449787 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9772b8db-092d-4b31-bc9d-729c89bba451-tigera-ca-bundle\") pod \"calico-typha-668dcbc8f4-fhhp4\" (UID: \"9772b8db-092d-4b31-bc9d-729c89bba451\") " pod="calico-system/calico-typha-668dcbc8f4-fhhp4"
Apr 28 00:13:23.450004 kubelet[2562]: I0428 00:13:23.449806 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9772b8db-092d-4b31-bc9d-729c89bba451-typha-certs\") pod \"calico-typha-668dcbc8f4-fhhp4\" (UID: \"9772b8db-092d-4b31-bc9d-729c89bba451\") " pod="calico-system/calico-typha-668dcbc8f4-fhhp4"
Apr 28 00:13:23.513519 systemd[1]: Created slice kubepods-besteffort-podb5223732_1093_4eae_9148_d0beec8c9693.slice - libcontainer container kubepods-besteffort-podb5223732_1093_4eae_9148_d0beec8c9693.slice.
Apr 28 00:13:23.550359 kubelet[2562]: I0428 00:13:23.550295 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b5223732-1093-4eae-9148-d0beec8c9693-node-certs\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550359 kubelet[2562]: I0428 00:13:23.550354 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-sys-fs\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550540 kubelet[2562]: I0428 00:13:23.550389 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-var-run-calico\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550540 kubelet[2562]: I0428 00:13:23.550421 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-xtables-lock\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550540 kubelet[2562]: I0428 00:13:23.550467 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-flexvol-driver-host\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550540 kubelet[2562]: I0428 00:13:23.550494 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5223732-1093-4eae-9148-d0beec8c9693-tigera-ca-bundle\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550540 kubelet[2562]: I0428 00:13:23.550518 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-var-lib-calico\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550689 kubelet[2562]: I0428 00:13:23.550563 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-lib-modules\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550689 kubelet[2562]: I0428 00:13:23.550588 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlr2\" (UniqueName: \"kubernetes.io/projected/b5223732-1093-4eae-9148-d0beec8c9693-kube-api-access-njlr2\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550689 kubelet[2562]: I0428 00:13:23.550622 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-bpffs\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.550689 kubelet[2562]: I0428 00:13:23.550646 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-cni-bin-dir\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.551887 kubelet[2562]: I0428 00:13:23.551472 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-cni-log-dir\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.551887 kubelet[2562]: I0428 00:13:23.551529 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-cni-net-dir\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.551887 kubelet[2562]: I0428 00:13:23.551549 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-nodeproc\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.551887 kubelet[2562]: I0428 00:13:23.551583 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b5223732-1093-4eae-9148-d0beec8c9693-policysync\") pod \"calico-node-c8wrg\" (UID: \"b5223732-1093-4eae-9148-d0beec8c9693\") " pod="calico-system/calico-node-c8wrg"
Apr 28 00:13:23.605924 kubelet[2562]: E0428 00:13:23.605872 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c"
Apr 28 00:13:23.654729 kubelet[2562]: I0428 00:13:23.651853 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fd7a659-0a04-4180-b5ea-79c09071b35c-kubelet-dir\") pod \"csi-node-driver-687qr\" (UID: \"5fd7a659-0a04-4180-b5ea-79c09071b35c\") " pod="calico-system/csi-node-driver-687qr"
Apr 28 00:13:23.654729 kubelet[2562]: I0428 00:13:23.651900 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5fd7a659-0a04-4180-b5ea-79c09071b35c-socket-dir\") pod \"csi-node-driver-687qr\" (UID: \"5fd7a659-0a04-4180-b5ea-79c09071b35c\") " pod="calico-system/csi-node-driver-687qr"
Apr 28 00:13:23.654729 kubelet[2562]: I0428 00:13:23.651919 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5fd7a659-0a04-4180-b5ea-79c09071b35c-varrun\") pod \"csi-node-driver-687qr\" (UID: \"5fd7a659-0a04-4180-b5ea-79c09071b35c\") " pod="calico-system/csi-node-driver-687qr"
Apr 28 00:13:23.654729 kubelet[2562]: I0428 00:13:23.651935 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bbw9\" (UniqueName: \"kubernetes.io/projected/5fd7a659-0a04-4180-b5ea-79c09071b35c-kube-api-access-4bbw9\") pod \"csi-node-driver-687qr\" (UID: \"5fd7a659-0a04-4180-b5ea-79c09071b35c\") " pod="calico-system/csi-node-driver-687qr"
Apr 28 00:13:23.654729 kubelet[2562]: I0428 00:13:23.651963 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fd7a659-0a04-4180-b5ea-79c09071b35c-registration-dir\") pod \"csi-node-driver-687qr\" (UID: \"5fd7a659-0a04-4180-b5ea-79c09071b35c\") " pod="calico-system/csi-node-driver-687qr"
Apr 28 00:13:23.658713 kubelet[2562]: E0428 00:13:23.656399 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.658983 kubelet[2562]: W0428 00:13:23.658918 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.659038 kubelet[2562]: E0428 00:13:23.658995 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.661770 kubelet[2562]: E0428 00:13:23.659533 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.661770 kubelet[2562]: W0428 00:13:23.659561 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.661770 kubelet[2562]: E0428 00:13:23.659576 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.661770 kubelet[2562]: E0428 00:13:23.660469 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.661770 kubelet[2562]: W0428 00:13:23.660488 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.661770 kubelet[2562]: E0428 00:13:23.660506 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.662510 kubelet[2562]: E0428 00:13:23.662387 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.662510 kubelet[2562]: W0428 00:13:23.662413 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.662510 kubelet[2562]: E0428 00:13:23.662432 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.663959 kubelet[2562]: E0428 00:13:23.663438 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.663959 kubelet[2562]: W0428 00:13:23.663461 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.663959 kubelet[2562]: E0428 00:13:23.663491 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.664437 kubelet[2562]: E0428 00:13:23.664283 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.664437 kubelet[2562]: W0428 00:13:23.664303 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.664437 kubelet[2562]: E0428 00:13:23.664316 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.664921 kubelet[2562]: E0428 00:13:23.664890 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.664921 kubelet[2562]: W0428 00:13:23.664911 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.665074 kubelet[2562]: E0428 00:13:23.664924 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.665200 kubelet[2562]: E0428 00:13:23.665178 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.665200 kubelet[2562]: W0428 00:13:23.665194 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.666247 kubelet[2562]: E0428 00:13:23.665203 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.666247 kubelet[2562]: E0428 00:13:23.665430 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.666247 kubelet[2562]: W0428 00:13:23.665439 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.666247 kubelet[2562]: E0428 00:13:23.665449 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.666247 kubelet[2562]: E0428 00:13:23.665606 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.666247 kubelet[2562]: W0428 00:13:23.665614 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.666247 kubelet[2562]: E0428 00:13:23.665622 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.666247 kubelet[2562]: E0428 00:13:23.665805 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.666247 kubelet[2562]: W0428 00:13:23.665814 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.666247 kubelet[2562]: E0428 00:13:23.665822 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.666495 kubelet[2562]: E0428 00:13:23.665946 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.666495 kubelet[2562]: W0428 00:13:23.665954 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.666495 kubelet[2562]: E0428 00:13:23.665961 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.666495 kubelet[2562]: E0428 00:13:23.666124 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.666495 kubelet[2562]: W0428 00:13:23.666132 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.666495 kubelet[2562]: E0428 00:13:23.666140 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.691762 kubelet[2562]: E0428 00:13:23.691194 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.691762 kubelet[2562]: W0428 00:13:23.691229 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.691762 kubelet[2562]: E0428 00:13:23.691261 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.714441 containerd[1486]: time="2026-04-28T00:13:23.712366252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-668dcbc8f4-fhhp4,Uid:9772b8db-092d-4b31-bc9d-729c89bba451,Namespace:calico-system,Attempt:0,}"
Apr 28 00:13:23.745169 containerd[1486]: time="2026-04-28T00:13:23.744696023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:13:23.745169 containerd[1486]: time="2026-04-28T00:13:23.744825527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:13:23.745169 containerd[1486]: time="2026-04-28T00:13:23.744858813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:23.745169 containerd[1486]: time="2026-04-28T00:13:23.744983437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:23.752885 kubelet[2562]: E0428 00:13:23.752752 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.752885 kubelet[2562]: W0428 00:13:23.752778 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.752885 kubelet[2562]: E0428 00:13:23.752818 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.754271 kubelet[2562]: E0428 00:13:23.753836 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.754271 kubelet[2562]: W0428 00:13:23.753858 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.754271 kubelet[2562]: E0428 00:13:23.753887 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.754271 kubelet[2562]: E0428 00:13:23.754203 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.754271 kubelet[2562]: W0428 00:13:23.754222 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.754271 kubelet[2562]: E0428 00:13:23.754248 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.755454 kubelet[2562]: E0428 00:13:23.755072 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.755454 kubelet[2562]: W0428 00:13:23.755089 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.755454 kubelet[2562]: E0428 00:13:23.755102 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.755966 kubelet[2562]: E0428 00:13:23.755779 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.755966 kubelet[2562]: W0428 00:13:23.755793 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.755966 kubelet[2562]: E0428 00:13:23.755825 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.756510 kubelet[2562]: E0428 00:13:23.756374 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.756510 kubelet[2562]: W0428 00:13:23.756397 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.756510 kubelet[2562]: E0428 00:13:23.756409 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.756763 kubelet[2562]: E0428 00:13:23.756644 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.756763 kubelet[2562]: W0428 00:13:23.756654 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.756763 kubelet[2562]: E0428 00:13:23.756683 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.757125 kubelet[2562]: E0428 00:13:23.757110 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.757218 kubelet[2562]: W0428 00:13:23.757207 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.757299 kubelet[2562]: E0428 00:13:23.757265 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.758119 kubelet[2562]: E0428 00:13:23.757843 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.758119 kubelet[2562]: W0428 00:13:23.757856 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.758119 kubelet[2562]: E0428 00:13:23.757868 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.759605 kubelet[2562]: E0428 00:13:23.759399 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.759605 kubelet[2562]: W0428 00:13:23.759420 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.759605 kubelet[2562]: E0428 00:13:23.759447 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.762239 kubelet[2562]: E0428 00:13:23.762209 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.764519 kubelet[2562]: W0428 00:13:23.764215 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.764519 kubelet[2562]: E0428 00:13:23.764245 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.765928 kubelet[2562]: E0428 00:13:23.765194 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.766746 kubelet[2562]: W0428 00:13:23.765211 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.766841 kubelet[2562]: E0428 00:13:23.766762 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.767817 kubelet[2562]: E0428 00:13:23.767779 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.767817 kubelet[2562]: W0428 00:13:23.767806 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.767933 kubelet[2562]: E0428 00:13:23.767828 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.768446 kubelet[2562]: E0428 00:13:23.768147 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.768446 kubelet[2562]: W0428 00:13:23.768175 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.768446 kubelet[2562]: E0428 00:13:23.768190 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.768446 kubelet[2562]: E0428 00:13:23.768440 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.768446 kubelet[2562]: W0428 00:13:23.768451 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.768649 kubelet[2562]: E0428 00:13:23.768461 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.768649 kubelet[2562]: E0428 00:13:23.768631 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.768649 kubelet[2562]: W0428 00:13:23.768640 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.769189 kubelet[2562]: E0428 00:13:23.768675 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.769189 kubelet[2562]: E0428 00:13:23.768840 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.769189 kubelet[2562]: W0428 00:13:23.768849 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.769189 kubelet[2562]: E0428 00:13:23.768857 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.769189 kubelet[2562]: E0428 00:13:23.769063 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.769189 kubelet[2562]: W0428 00:13:23.769071 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.769189 kubelet[2562]: E0428 00:13:23.769079 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.769420 kubelet[2562]: E0428 00:13:23.769326 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.769420 kubelet[2562]: W0428 00:13:23.769337 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.769420 kubelet[2562]: E0428 00:13:23.769347 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.769602 kubelet[2562]: E0428 00:13:23.769507 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.769602 kubelet[2562]: W0428 00:13:23.769515 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.769602 kubelet[2562]: E0428 00:13:23.769523 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.769910 kubelet[2562]: E0428 00:13:23.769793 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.769910 kubelet[2562]: W0428 00:13:23.769803 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.769910 kubelet[2562]: E0428 00:13:23.769812 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.770148 kubelet[2562]: E0428 00:13:23.770038 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.770148 kubelet[2562]: W0428 00:13:23.770055 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.770148 kubelet[2562]: E0428 00:13:23.770066 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.770534 kubelet[2562]: E0428 00:13:23.770372 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.770534 kubelet[2562]: W0428 00:13:23.770390 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.770534 kubelet[2562]: E0428 00:13:23.770401 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.770714 kubelet[2562]: E0428 00:13:23.770630 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.770714 kubelet[2562]: W0428 00:13:23.770648 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.771712 kubelet[2562]: E0428 00:13:23.770710 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.771007 systemd[1]: Started cri-containerd-3855ab39255778b7b51fb3e77d12bc0d3112bd6dfd613fc870a68cbe8bccd535.scope - libcontainer container 3855ab39255778b7b51fb3e77d12bc0d3112bd6dfd613fc870a68cbe8bccd535.
Apr 28 00:13:23.772143 kubelet[2562]: E0428 00:13:23.772081 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.772396 kubelet[2562]: W0428 00:13:23.772106 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.772771 kubelet[2562]: E0428 00:13:23.772371 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.789084 kubelet[2562]: E0428 00:13:23.788998 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 28 00:13:23.789084 kubelet[2562]: W0428 00:13:23.789037 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 28 00:13:23.789501 kubelet[2562]: E0428 00:13:23.789398 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 28 00:13:23.819908 containerd[1486]: time="2026-04-28T00:13:23.819795932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c8wrg,Uid:b5223732-1093-4eae-9148-d0beec8c9693,Namespace:calico-system,Attempt:0,}"
Apr 28 00:13:23.826742 containerd[1486]: time="2026-04-28T00:13:23.826672387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-668dcbc8f4-fhhp4,Uid:9772b8db-092d-4b31-bc9d-729c89bba451,Namespace:calico-system,Attempt:0,} returns sandbox id \"3855ab39255778b7b51fb3e77d12bc0d3112bd6dfd613fc870a68cbe8bccd535\""
Apr 28 00:13:23.830966 containerd[1486]: time="2026-04-28T00:13:23.830315354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\""
Apr 28 00:13:23.852930 containerd[1486]: time="2026-04-28T00:13:23.852438242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:13:23.852930 containerd[1486]: time="2026-04-28T00:13:23.852528579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:13:23.852930 containerd[1486]: time="2026-04-28T00:13:23.852545182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:23.852930 containerd[1486]: time="2026-04-28T00:13:23.852641040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:23.875962 systemd[1]: Started cri-containerd-e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84.scope - libcontainer container e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84. Apr 28 00:13:23.907565 containerd[1486]: time="2026-04-28T00:13:23.905651587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c8wrg,Uid:b5223732-1093-4eae-9148-d0beec8c9693,Namespace:calico-system,Attempt:0,} returns sandbox id \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\"" Apr 28 00:13:25.110746 kubelet[2562]: E0428 00:13:25.109826 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:25.580885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930258100.mount: Deactivated successfully. 
Apr 28 00:13:26.655768 containerd[1486]: time="2026-04-28T00:13:26.654855580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:26.657032 containerd[1486]: time="2026-04-28T00:13:26.656036090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=32841445" Apr 28 00:13:26.657032 containerd[1486]: time="2026-04-28T00:13:26.656331377Z" level=info msg="ImageCreate event name:\"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:26.660310 containerd[1486]: time="2026-04-28T00:13:26.659563338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:26.660538 containerd[1486]: time="2026-04-28T00:13:26.660492127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"32841299\" in 2.830123003s" Apr 28 00:13:26.660575 containerd[1486]: time="2026-04-28T00:13:26.660539015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\"" Apr 28 00:13:26.662187 containerd[1486]: time="2026-04-28T00:13:26.662139872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 28 00:13:26.680900 containerd[1486]: time="2026-04-28T00:13:26.680849044Z" level=info msg="CreateContainer within sandbox \"3855ab39255778b7b51fb3e77d12bc0d3112bd6dfd613fc870a68cbe8bccd535\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 28 00:13:26.703424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484846260.mount: Deactivated successfully. Apr 28 00:13:26.709272 containerd[1486]: time="2026-04-28T00:13:26.709115595Z" level=info msg="CreateContainer within sandbox \"3855ab39255778b7b51fb3e77d12bc0d3112bd6dfd613fc870a68cbe8bccd535\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8a1e6b972d27a8a96d195b6513cb2c0a53411b64bf370dde317af6829ef492ad\"" Apr 28 00:13:26.709978 containerd[1486]: time="2026-04-28T00:13:26.709923645Z" level=info msg="StartContainer for \"8a1e6b972d27a8a96d195b6513cb2c0a53411b64bf370dde317af6829ef492ad\"" Apr 28 00:13:26.751153 systemd[1]: Started cri-containerd-8a1e6b972d27a8a96d195b6513cb2c0a53411b64bf370dde317af6829ef492ad.scope - libcontainer container 8a1e6b972d27a8a96d195b6513cb2c0a53411b64bf370dde317af6829ef492ad. Apr 28 00:13:26.799469 containerd[1486]: time="2026-04-28T00:13:26.799371285Z" level=info msg="StartContainer for \"8a1e6b972d27a8a96d195b6513cb2c0a53411b64bf370dde317af6829ef492ad\" returns successfully" Apr 28 00:13:27.111738 kubelet[2562]: E0428 00:13:27.110436 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:27.257844 kubelet[2562]: E0428 00:13:27.257182 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.257844 kubelet[2562]: W0428 00:13:27.257220 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.257844 
kubelet[2562]: E0428 00:13:27.257276 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.259487 kubelet[2562]: E0428 00:13:27.258320 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.259487 kubelet[2562]: W0428 00:13:27.258337 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.259487 kubelet[2562]: E0428 00:13:27.258352 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.260172 kubelet[2562]: E0428 00:13:27.260153 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.260335 kubelet[2562]: W0428 00:13:27.260259 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.260335 kubelet[2562]: E0428 00:13:27.260283 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.261894 kubelet[2562]: E0428 00:13:27.261781 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.261894 kubelet[2562]: W0428 00:13:27.261801 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.261894 kubelet[2562]: E0428 00:13:27.261816 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.262431 kubelet[2562]: E0428 00:13:27.262327 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.262431 kubelet[2562]: W0428 00:13:27.262340 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.262431 kubelet[2562]: E0428 00:13:27.262352 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.262758 kubelet[2562]: E0428 00:13:27.262676 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.262758 kubelet[2562]: W0428 00:13:27.262688 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.262758 kubelet[2562]: E0428 00:13:27.262701 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.263140 kubelet[2562]: E0428 00:13:27.263080 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.263140 kubelet[2562]: W0428 00:13:27.263092 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.263140 kubelet[2562]: E0428 00:13:27.263102 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.264113 kubelet[2562]: E0428 00:13:27.263988 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.264113 kubelet[2562]: W0428 00:13:27.264001 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.264113 kubelet[2562]: E0428 00:13:27.264012 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.267138 kubelet[2562]: E0428 00:13:27.266963 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.267138 kubelet[2562]: W0428 00:13:27.266994 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.267138 kubelet[2562]: E0428 00:13:27.267015 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.267466 kubelet[2562]: E0428 00:13:27.267360 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.267466 kubelet[2562]: W0428 00:13:27.267371 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.267466 kubelet[2562]: E0428 00:13:27.267382 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.267749 kubelet[2562]: E0428 00:13:27.267566 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.267749 kubelet[2562]: W0428 00:13:27.267574 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.267749 kubelet[2562]: E0428 00:13:27.267583 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.267918 kubelet[2562]: E0428 00:13:27.267898 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.268098 kubelet[2562]: W0428 00:13:27.267974 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.268098 kubelet[2562]: E0428 00:13:27.267988 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.268775 kubelet[2562]: E0428 00:13:27.268758 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.268969 kubelet[2562]: W0428 00:13:27.268806 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.268969 kubelet[2562]: E0428 00:13:27.268819 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.269346 kubelet[2562]: E0428 00:13:27.269244 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.269346 kubelet[2562]: W0428 00:13:27.269257 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.269346 kubelet[2562]: E0428 00:13:27.269267 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.271754 kubelet[2562]: E0428 00:13:27.271630 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.271754 kubelet[2562]: W0428 00:13:27.271645 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.271754 kubelet[2562]: E0428 00:13:27.271708 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.286388 kubelet[2562]: E0428 00:13:27.286221 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.286388 kubelet[2562]: W0428 00:13:27.286245 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.286388 kubelet[2562]: E0428 00:13:27.286263 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.286868 kubelet[2562]: E0428 00:13:27.286701 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.286868 kubelet[2562]: W0428 00:13:27.286717 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.286868 kubelet[2562]: E0428 00:13:27.286736 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.288959 kubelet[2562]: E0428 00:13:27.288830 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.288959 kubelet[2562]: W0428 00:13:27.288882 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.288959 kubelet[2562]: E0428 00:13:27.288904 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.289909 kubelet[2562]: E0428 00:13:27.289722 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.289909 kubelet[2562]: W0428 00:13:27.289751 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.289909 kubelet[2562]: E0428 00:13:27.289774 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.290290 kubelet[2562]: E0428 00:13:27.290216 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.290290 kubelet[2562]: W0428 00:13:27.290231 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.290290 kubelet[2562]: E0428 00:13:27.290244 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.290785 kubelet[2562]: E0428 00:13:27.290694 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.290785 kubelet[2562]: W0428 00:13:27.290707 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.290785 kubelet[2562]: E0428 00:13:27.290724 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.291278 kubelet[2562]: E0428 00:13:27.291114 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.291278 kubelet[2562]: W0428 00:13:27.291129 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.291278 kubelet[2562]: E0428 00:13:27.291141 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.291892 kubelet[2562]: E0428 00:13:27.291841 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.292166 kubelet[2562]: W0428 00:13:27.292041 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.292166 kubelet[2562]: E0428 00:13:27.292059 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.292938 kubelet[2562]: E0428 00:13:27.292802 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.292938 kubelet[2562]: W0428 00:13:27.292825 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.292938 kubelet[2562]: E0428 00:13:27.292837 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.293237 kubelet[2562]: E0428 00:13:27.293193 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.293237 kubelet[2562]: W0428 00:13:27.293206 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.293237 kubelet[2562]: E0428 00:13:27.293216 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.294579 kubelet[2562]: E0428 00:13:27.293607 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.294579 kubelet[2562]: W0428 00:13:27.293619 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.294579 kubelet[2562]: E0428 00:13:27.293631 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.295253 kubelet[2562]: E0428 00:13:27.295117 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.295253 kubelet[2562]: W0428 00:13:27.295133 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.295253 kubelet[2562]: E0428 00:13:27.295148 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.295650 kubelet[2562]: E0428 00:13:27.295542 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.295650 kubelet[2562]: W0428 00:13:27.295555 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.295650 kubelet[2562]: E0428 00:13:27.295566 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.296289 kubelet[2562]: E0428 00:13:27.296036 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.296289 kubelet[2562]: W0428 00:13:27.296048 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.296289 kubelet[2562]: E0428 00:13:27.296059 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.296565 kubelet[2562]: E0428 00:13:27.296552 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.296633 kubelet[2562]: W0428 00:13:27.296622 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.296705 kubelet[2562]: E0428 00:13:27.296693 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.297086 kubelet[2562]: E0428 00:13:27.296974 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.297086 kubelet[2562]: W0428 00:13:27.296985 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.297086 kubelet[2562]: E0428 00:13:27.296995 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:27.297585 kubelet[2562]: E0428 00:13:27.297318 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.297585 kubelet[2562]: W0428 00:13:27.297330 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.297585 kubelet[2562]: E0428 00:13:27.297340 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:27.298032 kubelet[2562]: E0428 00:13:27.297980 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:27.298142 kubelet[2562]: W0428 00:13:27.298101 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:27.298228 kubelet[2562]: E0428 00:13:27.298197 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.232534 kubelet[2562]: I0428 00:13:28.231794 2562 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:13:28.283140 kubelet[2562]: E0428 00:13:28.282923 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.283140 kubelet[2562]: W0428 00:13:28.282945 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.283140 kubelet[2562]: E0428 00:13:28.282965 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.283526 kubelet[2562]: E0428 00:13:28.283393 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.283526 kubelet[2562]: W0428 00:13:28.283406 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.283526 kubelet[2562]: E0428 00:13:28.283419 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.283840 kubelet[2562]: E0428 00:13:28.283631 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.283840 kubelet[2562]: W0428 00:13:28.283640 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.283840 kubelet[2562]: E0428 00:13:28.283651 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.284384 kubelet[2562]: E0428 00:13:28.284138 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.284384 kubelet[2562]: W0428 00:13:28.284153 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.284384 kubelet[2562]: E0428 00:13:28.284164 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.284570 kubelet[2562]: E0428 00:13:28.284553 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.284612 kubelet[2562]: W0428 00:13:28.284571 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.284612 kubelet[2562]: E0428 00:13:28.284585 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.284773 kubelet[2562]: E0428 00:13:28.284761 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.284813 kubelet[2562]: W0428 00:13:28.284775 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.284813 kubelet[2562]: E0428 00:13:28.284784 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.284935 kubelet[2562]: E0428 00:13:28.284925 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.284935 kubelet[2562]: W0428 00:13:28.284935 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.285010 kubelet[2562]: E0428 00:13:28.284944 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.285132 kubelet[2562]: E0428 00:13:28.285121 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.285168 kubelet[2562]: W0428 00:13:28.285132 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.285168 kubelet[2562]: E0428 00:13:28.285142 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.285320 kubelet[2562]: E0428 00:13:28.285310 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.285320 kubelet[2562]: W0428 00:13:28.285320 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.285383 kubelet[2562]: E0428 00:13:28.285329 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.285465 kubelet[2562]: E0428 00:13:28.285456 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.285497 kubelet[2562]: W0428 00:13:28.285465 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.285497 kubelet[2562]: E0428 00:13:28.285473 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.285607 kubelet[2562]: E0428 00:13:28.285598 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.285607 kubelet[2562]: W0428 00:13:28.285607 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.285692 kubelet[2562]: E0428 00:13:28.285615 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.285787 kubelet[2562]: E0428 00:13:28.285776 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.285815 kubelet[2562]: W0428 00:13:28.285787 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.285815 kubelet[2562]: E0428 00:13:28.285796 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.285954 kubelet[2562]: E0428 00:13:28.285944 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.286056 kubelet[2562]: W0428 00:13:28.285954 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.286056 kubelet[2562]: E0428 00:13:28.285963 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.286134 kubelet[2562]: E0428 00:13:28.286119 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.286134 kubelet[2562]: W0428 00:13:28.286132 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.286197 kubelet[2562]: E0428 00:13:28.286142 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.286298 kubelet[2562]: E0428 00:13:28.286288 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.286298 kubelet[2562]: W0428 00:13:28.286298 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.286357 kubelet[2562]: E0428 00:13:28.286306 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.298601 kubelet[2562]: E0428 00:13:28.298355 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.298601 kubelet[2562]: W0428 00:13:28.298394 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.298601 kubelet[2562]: E0428 00:13:28.298422 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.299064 kubelet[2562]: E0428 00:13:28.298929 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.299064 kubelet[2562]: W0428 00:13:28.298965 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.299064 kubelet[2562]: E0428 00:13:28.298980 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.299333 kubelet[2562]: E0428 00:13:28.299307 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.299333 kubelet[2562]: W0428 00:13:28.299323 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.299464 kubelet[2562]: E0428 00:13:28.299338 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.299581 kubelet[2562]: E0428 00:13:28.299567 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.299581 kubelet[2562]: W0428 00:13:28.299580 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.299698 kubelet[2562]: E0428 00:13:28.299591 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.300215 kubelet[2562]: E0428 00:13:28.300181 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.300215 kubelet[2562]: W0428 00:13:28.300203 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.300297 kubelet[2562]: E0428 00:13:28.300217 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.301301 kubelet[2562]: E0428 00:13:28.301269 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.301301 kubelet[2562]: W0428 00:13:28.301292 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.302344 kubelet[2562]: E0428 00:13:28.301314 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.302344 kubelet[2562]: E0428 00:13:28.301523 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.302344 kubelet[2562]: W0428 00:13:28.301533 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.302344 kubelet[2562]: E0428 00:13:28.301546 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.302344 kubelet[2562]: E0428 00:13:28.301729 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.302344 kubelet[2562]: W0428 00:13:28.301739 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.302344 kubelet[2562]: E0428 00:13:28.301749 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.302344 kubelet[2562]: E0428 00:13:28.301941 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.302344 kubelet[2562]: W0428 00:13:28.301951 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.302344 kubelet[2562]: E0428 00:13:28.301960 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.302560 kubelet[2562]: E0428 00:13:28.302190 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.302560 kubelet[2562]: W0428 00:13:28.302202 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.302560 kubelet[2562]: E0428 00:13:28.302213 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.302560 kubelet[2562]: E0428 00:13:28.302437 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.302560 kubelet[2562]: W0428 00:13:28.302448 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.302560 kubelet[2562]: E0428 00:13:28.302458 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.303868 kubelet[2562]: E0428 00:13:28.302957 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.303868 kubelet[2562]: W0428 00:13:28.302971 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.303868 kubelet[2562]: E0428 00:13:28.302984 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.303868 kubelet[2562]: E0428 00:13:28.303376 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.303868 kubelet[2562]: W0428 00:13:28.303413 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.303868 kubelet[2562]: E0428 00:13:28.303428 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.304515 kubelet[2562]: E0428 00:13:28.304333 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.304515 kubelet[2562]: W0428 00:13:28.304352 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.304515 kubelet[2562]: E0428 00:13:28.304367 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.304751 kubelet[2562]: E0428 00:13:28.304621 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.304751 kubelet[2562]: W0428 00:13:28.304631 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.304751 kubelet[2562]: E0428 00:13:28.304641 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.304883 kubelet[2562]: E0428 00:13:28.304871 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.304883 kubelet[2562]: W0428 00:13:28.304883 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.304942 kubelet[2562]: E0428 00:13:28.304892 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.305118 kubelet[2562]: E0428 00:13:28.305106 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.305150 kubelet[2562]: W0428 00:13:28.305138 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.305150 kubelet[2562]: E0428 00:13:28.305147 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 28 00:13:28.305496 kubelet[2562]: E0428 00:13:28.305482 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 28 00:13:28.305496 kubelet[2562]: W0428 00:13:28.305495 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 28 00:13:28.305561 kubelet[2562]: E0428 00:13:28.305507 2562 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 28 00:13:28.376206 containerd[1486]: time="2026-04-28T00:13:28.376106961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:28.378333 containerd[1486]: time="2026-04-28T00:13:28.378274596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4404646" Apr 28 00:13:28.380864 containerd[1486]: time="2026-04-28T00:13:28.380802684Z" level=info msg="ImageCreate event name:\"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:28.383347 containerd[1486]: time="2026-04-28T00:13:28.383284685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:28.384192 containerd[1486]: time="2026-04-28T00:13:28.384116446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"6980245\" in 1.721934007s" Apr 28 00:13:28.384192 containerd[1486]: time="2026-04-28T00:13:28.384161893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\"" Apr 28 00:13:28.392038 containerd[1486]: time="2026-04-28T00:13:28.391841690Z" level=info msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 28 00:13:28.416329 containerd[1486]: time="2026-04-28T00:13:28.416143224Z" level=info msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a\"" Apr 28 00:13:28.418361 containerd[1486]: time="2026-04-28T00:13:28.417297712Z" level=info msg="StartContainer for \"110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a\"" Apr 28 00:13:28.458932 systemd[1]: Started cri-containerd-110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a.scope - libcontainer container 110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a. Apr 28 00:13:28.494230 containerd[1486]: time="2026-04-28T00:13:28.494104404Z" level=info msg="StartContainer for \"110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a\" returns successfully" Apr 28 00:13:28.510699 systemd[1]: cri-containerd-110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a.scope: Deactivated successfully. 
Apr 28 00:13:28.634995 containerd[1486]: time="2026-04-28T00:13:28.634920605Z" level=info msg="shim disconnected" id=110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a namespace=k8s.io Apr 28 00:13:28.635497 containerd[1486]: time="2026-04-28T00:13:28.635463884Z" level=warning msg="cleaning up after shim disconnected" id=110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a namespace=k8s.io Apr 28 00:13:28.635629 containerd[1486]: time="2026-04-28T00:13:28.635612666Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:13:28.670150 systemd[1]: run-containerd-runc-k8s.io-110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a-runc.CwEUpk.mount: Deactivated successfully. Apr 28 00:13:28.670284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110cffb5ac33072b7830417b32be081be612c4e1717954048025decfd860ff8a-rootfs.mount: Deactivated successfully. Apr 28 00:13:29.111720 kubelet[2562]: E0428 00:13:29.110840 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:29.241730 containerd[1486]: time="2026-04-28T00:13:29.241625678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 28 00:13:29.272459 kubelet[2562]: I0428 00:13:29.272336 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-668dcbc8f4-fhhp4" podStartSLOduration=3.439821038 podStartE2EDuration="6.272309245s" podCreationTimestamp="2026-04-28 00:13:23 +0000 UTC" firstStartedPulling="2026-04-28 00:13:23.829514523 +0000 UTC m=+22.859335774" lastFinishedPulling="2026-04-28 00:13:26.66200277 +0000 UTC m=+25.691823981" observedRunningTime="2026-04-28 00:13:27.279528396 +0000 UTC m=+26.309349647" 
watchObservedRunningTime="2026-04-28 00:13:29.272309245 +0000 UTC m=+28.302130456" Apr 28 00:13:31.111551 kubelet[2562]: E0428 00:13:31.110237 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:33.111712 kubelet[2562]: E0428 00:13:33.110992 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:35.110444 kubelet[2562]: E0428 00:13:35.109833 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:36.717143 kubelet[2562]: I0428 00:13:36.716495 2562 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:13:37.113183 kubelet[2562]: E0428 00:13:37.110852 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:37.895821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720868520.mount: Deactivated successfully. 
Apr 28 00:13:37.931524 containerd[1486]: time="2026-04-28T00:13:37.929986127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:37.931524 containerd[1486]: time="2026-04-28T00:13:37.931452108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=153029581" Apr 28 00:13:37.932527 containerd[1486]: time="2026-04-28T00:13:37.932476686Z" level=info msg="ImageCreate event name:\"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:37.935577 containerd[1486]: time="2026-04-28T00:13:37.935526698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:37.936286 containerd[1486]: time="2026-04-28T00:13:37.936237086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"153029443\" in 8.694535638s" Apr 28 00:13:37.936286 containerd[1486]: time="2026-04-28T00:13:37.936284290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\"" Apr 28 00:13:37.947542 containerd[1486]: time="2026-04-28T00:13:37.947474682Z" level=info msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 28 00:13:37.970400 containerd[1486]: time="2026-04-28T00:13:37.970297587Z" level=info 
msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb\"" Apr 28 00:13:37.972455 containerd[1486]: time="2026-04-28T00:13:37.972358544Z" level=info msg="StartContainer for \"af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb\"" Apr 28 00:13:38.010972 systemd[1]: Started cri-containerd-af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb.scope - libcontainer container af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb. Apr 28 00:13:38.053494 containerd[1486]: time="2026-04-28T00:13:38.053301167Z" level=info msg="StartContainer for \"af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb\" returns successfully" Apr 28 00:13:38.162020 systemd[1]: cri-containerd-af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb.scope: Deactivated successfully. Apr 28 00:13:38.306771 containerd[1486]: time="2026-04-28T00:13:38.306230627Z" level=info msg="shim disconnected" id=af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb namespace=k8s.io Apr 28 00:13:38.306771 containerd[1486]: time="2026-04-28T00:13:38.306293913Z" level=warning msg="cleaning up after shim disconnected" id=af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb namespace=k8s.io Apr 28 00:13:38.306771 containerd[1486]: time="2026-04-28T00:13:38.306302114Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:13:38.895181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af1109aaf76c2e39686cc4dba2c84ecb46ffbe26ccd0fb1889af4398b09a54bb-rootfs.mount: Deactivated successfully. 
Apr 28 00:13:39.110028 kubelet[2562]: E0428 00:13:39.109901 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:39.282896 containerd[1486]: time="2026-04-28T00:13:39.281891797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\"" Apr 28 00:13:41.109986 kubelet[2562]: E0428 00:13:41.109923 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:43.110643 kubelet[2562]: E0428 00:13:43.110524 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-687qr" podUID="5fd7a659-0a04-4180-b5ea-79c09071b35c" Apr 28 00:13:43.422726 containerd[1486]: time="2026-04-28T00:13:43.421246327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:43.423569 containerd[1486]: time="2026-04-28T00:13:43.423448934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=62266008" Apr 28 00:13:43.424332 containerd[1486]: time="2026-04-28T00:13:43.424224512Z" level=info msg="ImageCreate event name:\"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:43.428919 containerd[1486]: 
time="2026-04-28T00:13:43.428755054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:43.430595 containerd[1486]: time="2026-04-28T00:13:43.429647161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"64841647\" in 4.147709801s" Apr 28 00:13:43.430595 containerd[1486]: time="2026-04-28T00:13:43.429726047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\"" Apr 28 00:13:43.436034 containerd[1486]: time="2026-04-28T00:13:43.435768904Z" level=info msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 28 00:13:43.460709 containerd[1486]: time="2026-04-28T00:13:43.459072263Z" level=info msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d\"" Apr 28 00:13:43.461961 containerd[1486]: time="2026-04-28T00:13:43.461917477Z" level=info msg="StartContainer for \"cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d\"" Apr 28 00:13:43.502328 systemd[1]: run-containerd-runc-k8s.io-cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d-runc.1pL5UI.mount: Deactivated successfully. 
Apr 28 00:13:43.510019 systemd[1]: Started cri-containerd-cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d.scope - libcontainer container cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d. Apr 28 00:13:43.552423 containerd[1486]: time="2026-04-28T00:13:43.552353424Z" level=info msg="StartContainer for \"cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d\" returns successfully" Apr 28 00:13:44.116786 containerd[1486]: time="2026-04-28T00:13:44.116568744Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 00:13:44.121730 systemd[1]: cri-containerd-cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d.scope: Deactivated successfully. Apr 28 00:13:44.154815 kubelet[2562]: I0428 00:13:44.154766 2562 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 28 00:13:44.160755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d-rootfs.mount: Deactivated successfully. 
Apr 28 00:13:44.224072 containerd[1486]: time="2026-04-28T00:13:44.223988485Z" level=info msg="shim disconnected" id=cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d namespace=k8s.io Apr 28 00:13:44.224072 containerd[1486]: time="2026-04-28T00:13:44.224063891Z" level=warning msg="cleaning up after shim disconnected" id=cefc97adc162c16a62155f17c276dd84c5f0a8fc6b8cfb566274540827af504d namespace=k8s.io Apr 28 00:13:44.224072 containerd[1486]: time="2026-04-28T00:13:44.224073451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:13:44.268201 systemd[1]: Created slice kubepods-besteffort-podbb42889c_dc68_4341_9c32_e020808fbd20.slice - libcontainer container kubepods-besteffort-podbb42889c_dc68_4341_9c32_e020808fbd20.slice. Apr 28 00:13:44.290024 systemd[1]: Created slice kubepods-burstable-pod26e97b01_fe08_43da_b0e6_c4023295e62a.slice - libcontainer container kubepods-burstable-pod26e97b01_fe08_43da_b0e6_c4023295e62a.slice. Apr 28 00:13:44.306643 systemd[1]: Created slice kubepods-burstable-podb726e622_d473_4766_8608_042b869c5024.slice - libcontainer container kubepods-burstable-podb726e622_d473_4766_8608_042b869c5024.slice. Apr 28 00:13:44.319135 systemd[1]: Created slice kubepods-besteffort-podc14518d0_875d_4e4a_bf96_4a7030b764e9.slice - libcontainer container kubepods-besteffort-podc14518d0_875d_4e4a_bf96_4a7030b764e9.slice. 
Apr 28 00:13:44.321577 kubelet[2562]: I0428 00:13:44.321336 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-nginx-config\") pod \"whisker-95fd656b5-hrhjc\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " pod="calico-system/whisker-95fd656b5-hrhjc" Apr 28 00:13:44.321577 kubelet[2562]: I0428 00:13:44.321436 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb42889c-dc68-4341-9c32-e020808fbd20-tigera-ca-bundle\") pod \"calico-kube-controllers-7db98b7b86-8stzl\" (UID: \"bb42889c-dc68-4341-9c32-e020808fbd20\") " pod="calico-system/calico-kube-controllers-7db98b7b86-8stzl" Apr 28 00:13:44.321577 kubelet[2562]: I0428 00:13:44.321462 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9z6\" (UniqueName: \"kubernetes.io/projected/bb42889c-dc68-4341-9c32-e020808fbd20-kube-api-access-8n9z6\") pod \"calico-kube-controllers-7db98b7b86-8stzl\" (UID: \"bb42889c-dc68-4341-9c32-e020808fbd20\") " pod="calico-system/calico-kube-controllers-7db98b7b86-8stzl" Apr 28 00:13:44.321577 kubelet[2562]: I0428 00:13:44.321478 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26e97b01-fe08-43da-b0e6-c4023295e62a-config-volume\") pod \"coredns-7d764666f9-ghw74\" (UID: \"26e97b01-fe08-43da-b0e6-c4023295e62a\") " pod="kube-system/coredns-7d764666f9-ghw74" Apr 28 00:13:44.321577 kubelet[2562]: I0428 00:13:44.321546 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lshtt\" (UniqueName: \"kubernetes.io/projected/09ed7117-8974-4541-abe8-28a9ca0d9670-kube-api-access-lshtt\") pod 
\"calico-apiserver-748c896d4f-kszc5\" (UID: \"09ed7117-8974-4541-abe8-28a9ca0d9670\") " pod="calico-system/calico-apiserver-748c896d4f-kszc5" Apr 28 00:13:44.322161 kubelet[2562]: I0428 00:13:44.321948 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rn7b\" (UniqueName: \"kubernetes.io/projected/cde2ebe5-98ed-4186-a2bd-c3bd8f220759-kube-api-access-5rn7b\") pod \"calico-apiserver-748c896d4f-j7hjq\" (UID: \"cde2ebe5-98ed-4186-a2bd-c3bd8f220759\") " pod="calico-system/calico-apiserver-748c896d4f-j7hjq" Apr 28 00:13:44.322161 kubelet[2562]: I0428 00:13:44.321978 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-backend-key-pair\") pod \"whisker-95fd656b5-hrhjc\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " pod="calico-system/whisker-95fd656b5-hrhjc" Apr 28 00:13:44.322161 kubelet[2562]: I0428 00:13:44.322014 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm662\" (UniqueName: \"kubernetes.io/projected/26e97b01-fe08-43da-b0e6-c4023295e62a-kube-api-access-wm662\") pod \"coredns-7d764666f9-ghw74\" (UID: \"26e97b01-fe08-43da-b0e6-c4023295e62a\") " pod="kube-system/coredns-7d764666f9-ghw74" Apr 28 00:13:44.322161 kubelet[2562]: I0428 00:13:44.322032 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-ca-bundle\") pod \"whisker-95fd656b5-hrhjc\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " pod="calico-system/whisker-95fd656b5-hrhjc" Apr 28 00:13:44.322161 kubelet[2562]: I0428 00:13:44.322052 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ns4x\" 
(UniqueName: \"kubernetes.io/projected/0057e9c9-3060-413e-8c72-3f0c08b487fd-kube-api-access-9ns4x\") pod \"whisker-95fd656b5-hrhjc\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " pod="calico-system/whisker-95fd656b5-hrhjc" Apr 28 00:13:44.322299 kubelet[2562]: I0428 00:13:44.322084 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd6rj\" (UniqueName: \"kubernetes.io/projected/b726e622-d473-4766-8608-042b869c5024-kube-api-access-pd6rj\") pod \"coredns-7d764666f9-7rl8q\" (UID: \"b726e622-d473-4766-8608-042b869c5024\") " pod="kube-system/coredns-7d764666f9-7rl8q" Apr 28 00:13:44.322299 kubelet[2562]: I0428 00:13:44.322114 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6rjp\" (UniqueName: \"kubernetes.io/projected/c14518d0-875d-4e4a-bf96-4a7030b764e9-kube-api-access-w6rjp\") pod \"goldmane-7fb6cdc5d9-6h4jt\" (UID: \"c14518d0-875d-4e4a-bf96-4a7030b764e9\") " pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" Apr 28 00:13:44.322299 kubelet[2562]: I0428 00:13:44.322145 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14518d0-875d-4e4a-bf96-4a7030b764e9-config\") pod \"goldmane-7fb6cdc5d9-6h4jt\" (UID: \"c14518d0-875d-4e4a-bf96-4a7030b764e9\") " pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" Apr 28 00:13:44.322577 kubelet[2562]: I0428 00:13:44.322393 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c14518d0-875d-4e4a-bf96-4a7030b764e9-goldmane-ca-bundle\") pod \"goldmane-7fb6cdc5d9-6h4jt\" (UID: \"c14518d0-875d-4e4a-bf96-4a7030b764e9\") " pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" Apr 28 00:13:44.322577 kubelet[2562]: I0428 00:13:44.322417 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c14518d0-875d-4e4a-bf96-4a7030b764e9-goldmane-key-pair\") pod \"goldmane-7fb6cdc5d9-6h4jt\" (UID: \"c14518d0-875d-4e4a-bf96-4a7030b764e9\") " pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" Apr 28 00:13:44.322577 kubelet[2562]: I0428 00:13:44.322436 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09ed7117-8974-4541-abe8-28a9ca0d9670-calico-apiserver-certs\") pod \"calico-apiserver-748c896d4f-kszc5\" (UID: \"09ed7117-8974-4541-abe8-28a9ca0d9670\") " pod="calico-system/calico-apiserver-748c896d4f-kszc5" Apr 28 00:13:44.324410 kubelet[2562]: I0428 00:13:44.324288 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cde2ebe5-98ed-4186-a2bd-c3bd8f220759-calico-apiserver-certs\") pod \"calico-apiserver-748c896d4f-j7hjq\" (UID: \"cde2ebe5-98ed-4186-a2bd-c3bd8f220759\") " pod="calico-system/calico-apiserver-748c896d4f-j7hjq" Apr 28 00:13:44.324410 kubelet[2562]: I0428 00:13:44.324357 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b726e622-d473-4766-8608-042b869c5024-config-volume\") pod \"coredns-7d764666f9-7rl8q\" (UID: \"b726e622-d473-4766-8608-042b869c5024\") " pod="kube-system/coredns-7d764666f9-7rl8q" Apr 28 00:13:44.334611 systemd[1]: Created slice kubepods-besteffort-podcde2ebe5_98ed_4186_a2bd_c3bd8f220759.slice - libcontainer container kubepods-besteffort-podcde2ebe5_98ed_4186_a2bd_c3bd8f220759.slice. 
Apr 28 00:13:44.347278 containerd[1486]: time="2026-04-28T00:13:44.346393877Z" level=info msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 28 00:13:44.350469 systemd[1]: Created slice kubepods-besteffort-pod09ed7117_8974_4541_abe8_28a9ca0d9670.slice - libcontainer container kubepods-besteffort-pod09ed7117_8974_4541_abe8_28a9ca0d9670.slice. Apr 28 00:13:44.360413 systemd[1]: Created slice kubepods-besteffort-pod0057e9c9_3060_413e_8c72_3f0c08b487fd.slice - libcontainer container kubepods-besteffort-pod0057e9c9_3060_413e_8c72_3f0c08b487fd.slice. Apr 28 00:13:44.379441 containerd[1486]: time="2026-04-28T00:13:44.379026053Z" level=info msg="CreateContainer within sandbox \"e76279bb5a0f36f54a215cbd3bb7fc3b8fa2bdd920be27dc6e5ad9d86ad50b84\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4035ad8342ca424126e89282f5ad9dde5e3abc5e18d7b3e88113cfc7925fbd3a\"" Apr 28 00:13:44.382702 containerd[1486]: time="2026-04-28T00:13:44.382389618Z" level=info msg="StartContainer for \"4035ad8342ca424126e89282f5ad9dde5e3abc5e18d7b3e88113cfc7925fbd3a\"" Apr 28 00:13:44.450161 systemd[1]: Started cri-containerd-4035ad8342ca424126e89282f5ad9dde5e3abc5e18d7b3e88113cfc7925fbd3a.scope - libcontainer container 4035ad8342ca424126e89282f5ad9dde5e3abc5e18d7b3e88113cfc7925fbd3a. 
Apr 28 00:13:44.571092 containerd[1486]: time="2026-04-28T00:13:44.570935026Z" level=info msg="StartContainer for \"4035ad8342ca424126e89282f5ad9dde5e3abc5e18d7b3e88113cfc7925fbd3a\" returns successfully" Apr 28 00:13:44.583639 containerd[1486]: time="2026-04-28T00:13:44.583563025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db98b7b86-8stzl,Uid:bb42889c-dc68-4341-9c32-e020808fbd20,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:44.600133 containerd[1486]: time="2026-04-28T00:13:44.600018143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ghw74,Uid:26e97b01-fe08-43da-b0e6-c4023295e62a,Namespace:kube-system,Attempt:0,}" Apr 28 00:13:44.624058 containerd[1486]: time="2026-04-28T00:13:44.623750631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7rl8q,Uid:b726e622-d473-4766-8608-042b869c5024,Namespace:kube-system,Attempt:0,}" Apr 28 00:13:44.638346 containerd[1486]: time="2026-04-28T00:13:44.638283569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-6h4jt,Uid:c14518d0-875d-4e4a-bf96-4a7030b764e9,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:44.646908 containerd[1486]: time="2026-04-28T00:13:44.646825311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-j7hjq,Uid:cde2ebe5-98ed-4186-a2bd-c3bd8f220759,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:44.664044 containerd[1486]: time="2026-04-28T00:13:44.663749784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-kszc5,Uid:09ed7117-8974-4541-abe8-28a9ca0d9670,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:44.673108 containerd[1486]: time="2026-04-28T00:13:44.673058261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95fd656b5-hrhjc,Uid:0057e9c9-3060-413e-8c72-3f0c08b487fd,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:44.946865 containerd[1486]: 
time="2026-04-28T00:13:44.946199348Z" level=error msg="Failed to destroy network for sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.954720 containerd[1486]: time="2026-04-28T00:13:44.953935311Z" level=error msg="encountered an error cleaning up failed sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.955210 containerd[1486]: time="2026-04-28T00:13:44.955155720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db98b7b86-8stzl,Uid:bb42889c-dc68-4341-9c32-e020808fbd20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.958864 kubelet[2562]: E0428 00:13:44.958789 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.959003 kubelet[2562]: E0428 00:13:44.958892 2562 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db98b7b86-8stzl" Apr 28 00:13:44.959003 kubelet[2562]: E0428 00:13:44.958914 2562 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db98b7b86-8stzl" Apr 28 00:13:44.959235 kubelet[2562]: E0428 00:13:44.958996 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7db98b7b86-8stzl_calico-system(bb42889c-dc68-4341-9c32-e020808fbd20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7db98b7b86-8stzl_calico-system(bb42889c-dc68-4341-9c32-e020808fbd20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db98b7b86-8stzl" podUID="bb42889c-dc68-4341-9c32-e020808fbd20" Apr 28 00:13:44.984923 containerd[1486]: time="2026-04-28T00:13:44.984861563Z" level=error msg="Failed to destroy network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.987387 containerd[1486]: time="2026-04-28T00:13:44.987168251Z" level=error msg="encountered an error cleaning up failed sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.987387 containerd[1486]: time="2026-04-28T00:13:44.987269659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7rl8q,Uid:b726e622-d473-4766-8608-042b869c5024,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.990246 containerd[1486]: time="2026-04-28T00:13:44.990086544Z" level=error msg="Failed to destroy network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.990494 kubelet[2562]: E0428 00:13:44.990223 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:44.990494 kubelet[2562]: E0428 00:13:44.990299 2562 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-7rl8q" Apr 28 00:13:44.990494 kubelet[2562]: E0428 00:13:44.990321 2562 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-7rl8q" Apr 28 00:13:44.990621 kubelet[2562]: E0428 00:13:44.990382 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-7rl8q_kube-system(b726e622-d473-4766-8608-042b869c5024)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-7rl8q_kube-system(b726e622-d473-4766-8608-042b869c5024)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-7rl8q" podUID="b726e622-d473-4766-8608-042b869c5024" Apr 28 00:13:45.000359 containerd[1486]: time="2026-04-28T00:13:45.000203040Z" level=error msg="encountered an error cleaning up failed sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.001122 containerd[1486]: time="2026-04-28T00:13:45.000730918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ghw74,Uid:26e97b01-fe08-43da-b0e6-c4023295e62a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.001436 kubelet[2562]: E0428 00:13:45.001344 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.001436 kubelet[2562]: E0428 00:13:45.001421 2562 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ghw74" Apr 28 00:13:45.001994 kubelet[2562]: E0428 00:13:45.001443 2562 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ghw74" Apr 28 00:13:45.001994 kubelet[2562]: E0428 00:13:45.001515 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-ghw74_kube-system(26e97b01-fe08-43da-b0e6-c4023295e62a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-ghw74_kube-system(26e97b01-fe08-43da-b0e6-c4023295e62a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-ghw74" podUID="26e97b01-fe08-43da-b0e6-c4023295e62a" Apr 28 00:13:45.092636 containerd[1486]: time="2026-04-28T00:13:45.092556734Z" level=error msg="Failed to destroy network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.096995 containerd[1486]: time="2026-04-28T00:13:45.095953612Z" level=error msg="encountered an error cleaning up failed sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.097336 containerd[1486]: time="2026-04-28T00:13:45.097278185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-6h4jt,Uid:c14518d0-875d-4e4a-bf96-4a7030b764e9,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.098962 kubelet[2562]: E0428 00:13:45.098534 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.098962 kubelet[2562]: E0428 00:13:45.098625 2562 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" Apr 28 00:13:45.098962 kubelet[2562]: E0428 00:13:45.098646 2562 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" Apr 28 00:13:45.099303 kubelet[2562]: E0428 00:13:45.098726 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7fb6cdc5d9-6h4jt_calico-system(c14518d0-875d-4e4a-bf96-4a7030b764e9)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"goldmane-7fb6cdc5d9-6h4jt_calico-system(c14518d0-875d-4e4a-bf96-4a7030b764e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" podUID="c14518d0-875d-4e4a-bf96-4a7030b764e9" Apr 28 00:13:45.100525 containerd[1486]: time="2026-04-28T00:13:45.099926172Z" level=error msg="Failed to destroy network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.102200 containerd[1486]: time="2026-04-28T00:13:45.101954914Z" level=error msg="encountered an error cleaning up failed sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.102200 containerd[1486]: time="2026-04-28T00:13:45.102060722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-j7hjq,Uid:cde2ebe5-98ed-4186-a2bd-c3bd8f220759,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.103963 kubelet[2562]: E0428 00:13:45.103902 2562 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.104100 kubelet[2562]: E0428 00:13:45.103984 2562 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-748c896d4f-j7hjq" Apr 28 00:13:45.104100 kubelet[2562]: E0428 00:13:45.104006 2562 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-748c896d4f-j7hjq" Apr 28 00:13:45.104100 kubelet[2562]: E0428 00:13:45.104067 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-748c896d4f-j7hjq_calico-system(cde2ebe5-98ed-4186-a2bd-c3bd8f220759)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748c896d4f-j7hjq_calico-system(cde2ebe5-98ed-4186-a2bd-c3bd8f220759)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-748c896d4f-j7hjq" podUID="cde2ebe5-98ed-4186-a2bd-c3bd8f220759" Apr 28 00:13:45.134090 systemd[1]: Created slice kubepods-besteffort-pod5fd7a659_0a04_4180_b5ea_79c09071b35c.slice - libcontainer container kubepods-besteffort-pod5fd7a659_0a04_4180_b5ea_79c09071b35c.slice. Apr 28 00:13:45.148158 containerd[1486]: time="2026-04-28T00:13:45.147356226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-687qr,Uid:5fd7a659-0a04-4180-b5ea-79c09071b35c,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:45.317035 kubelet[2562]: I0428 00:13:45.315523 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:13:45.323135 containerd[1486]: time="2026-04-28T00:13:45.321977981Z" level=info msg="StopPodSandbox for \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\"" Apr 28 00:13:45.323135 containerd[1486]: time="2026-04-28T00:13:45.322210318Z" level=info msg="Ensure that sandbox 36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67 in task-service has been cleanup successfully" Apr 28 00:13:45.333744 kubelet[2562]: I0428 00:13:45.333454 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.201 [INFO][3649] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.201 [INFO][3649] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" iface="eth0" netns="/var/run/netns/cni-dc27b0c8-13c6-6c8b-4748-9edb5de2cc2e" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.202 [INFO][3649] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" iface="eth0" netns="/var/run/netns/cni-dc27b0c8-13c6-6c8b-4748-9edb5de2cc2e" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.203 [INFO][3649] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" iface="eth0" netns="/var/run/netns/cni-dc27b0c8-13c6-6c8b-4748-9edb5de2cc2e" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.203 [INFO][3649] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.203 [INFO][3649] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.285 [INFO][3688] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" HandleID="k8s-pod-network.cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" Workload="ci--4081--3--7--n--651e172f95-k8s-whisker--95fd656b5--hrhjc-eth0" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.286 [INFO][3688] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.286 [INFO][3688] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.304 [WARNING][3688] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" HandleID="k8s-pod-network.cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" Workload="ci--4081--3--7--n--651e172f95-k8s-whisker--95fd656b5--hrhjc-eth0" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.304 [INFO][3688] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" HandleID="k8s-pod-network.cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" Workload="ci--4081--3--7--n--651e172f95-k8s-whisker--95fd656b5--hrhjc-eth0" Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.309 [INFO][3688] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:45.337359 containerd[1486]: 2026-04-28 00:13:45.316 [INFO][3649] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493" Apr 28 00:13:45.338031 containerd[1486]: time="2026-04-28T00:13:45.337431708Z" level=info msg="StopPodSandbox for \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\"" Apr 28 00:13:45.338031 containerd[1486]: time="2026-04-28T00:13:45.337795613Z" level=info msg="Ensure that sandbox 9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2 in task-service has been cleanup successfully" Apr 28 00:13:45.343170 kubelet[2562]: I0428 00:13:45.343096 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:13:45.349466 containerd[1486]: time="2026-04-28T00:13:45.349415150Z" level=info msg="StopPodSandbox for \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\"" Apr 28 00:13:45.352012 containerd[1486]: time="2026-04-28T00:13:45.350983460Z" level=info msg="Ensure that sandbox cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d in task-service has been cleanup successfully" Apr 28 00:13:45.357321 kubelet[2562]: I0428 00:13:45.355173 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:13:45.362118 containerd[1486]: time="2026-04-28T00:13:45.362045118Z" level=info msg="StopPodSandbox for \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\"" Apr 28 00:13:45.362765 containerd[1486]: time="2026-04-28T00:13:45.362733766Z" level=info msg="Ensure that sandbox e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663 in task-service has been cleanup successfully" Apr 28 00:13:45.371986 containerd[1486]: time="2026-04-28T00:13:45.371927573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95fd656b5-hrhjc,Uid:0057e9c9-3060-413e-8c72-3f0c08b487fd,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.373070 kubelet[2562]: I0428 00:13:45.372937 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:13:45.374390 kubelet[2562]: E0428 00:13:45.373423 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.374390 kubelet[2562]: E0428 00:13:45.374186 2562 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf06f4578e2e208ec5862006cea935d0944857d31ec7d4361858e79b54c55493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-95fd656b5-hrhjc" Apr 28 00:13:45.376198 containerd[1486]: time="2026-04-28T00:13:45.375836807Z" level=info msg="StopPodSandbox for \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\"" Apr 28 00:13:45.376198 containerd[1486]: time="2026-04-28T00:13:45.376053623Z" level=info msg="Ensure that sandbox 8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32 in task-service has been cleanup successfully" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.193 [INFO][3661] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.193 [INFO][3661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" iface="eth0" netns="/var/run/netns/cni-d3dd9abf-3119-79c6-3f90-fa83d7f99bcf" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.197 [INFO][3661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" iface="eth0" netns="/var/run/netns/cni-d3dd9abf-3119-79c6-3f90-fa83d7f99bcf" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.197 [INFO][3661] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" iface="eth0" netns="/var/run/netns/cni-d3dd9abf-3119-79c6-3f90-fa83d7f99bcf" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.197 [INFO][3661] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.197 [INFO][3661] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.292 [INFO][3685] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" HandleID="k8s-pod-network.085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.296 [INFO][3685] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.311 [INFO][3685] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.343 [WARNING][3685] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" HandleID="k8s-pod-network.085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.346 [INFO][3685] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" HandleID="k8s-pod-network.085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.352 [INFO][3685] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:45.383464 containerd[1486]: 2026-04-28 00:13:45.363 [INFO][3661] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449" Apr 28 00:13:45.418013 containerd[1486]: time="2026-04-28T00:13:45.417133071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-kszc5,Uid:09ed7117-8974-4541-abe8-28a9ca0d9670,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.419980 kubelet[2562]: E0428 00:13:45.419921 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 28 00:13:45.420153 kubelet[2562]: E0428 00:13:45.419999 2562 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-748c896d4f-kszc5" Apr 28 00:13:45.420153 kubelet[2562]: E0428 00:13:45.420025 2562 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-apiserver-748c896d4f-kszc5" Apr 28 00:13:45.420153 kubelet[2562]: E0428 00:13:45.420091 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-748c896d4f-kszc5_calico-system(09ed7117-8974-4541-abe8-28a9ca0d9670)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748c896d4f-kszc5_calico-system(09ed7117-8974-4541-abe8-28a9ca0d9670)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"085486ed959647c0c5a52faf135137ad6768ca63429f5a17f3c887a7c3221449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-748c896d4f-kszc5" podUID="09ed7117-8974-4541-abe8-28a9ca0d9670" Apr 28 00:13:45.689695 kubelet[2562]: I0428 00:13:45.688944 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-c8wrg" podStartSLOduration=2.28704673 podStartE2EDuration="22.688923097s" podCreationTimestamp="2026-04-28 00:13:23 +0000 UTC" firstStartedPulling="2026-04-28 00:13:23.910604921 +0000 UTC m=+22.940426172" lastFinishedPulling="2026-04-28 00:13:44.312481288 +0000 UTC m=+43.342302539" observedRunningTime="2026-04-28 00:13:45.429974733 +0000 UTC m=+44.459795984" watchObservedRunningTime="2026-04-28 00:13:45.688923097 +0000 UTC m=+44.718744308" Apr 28 00:13:45.939417 systemd-networkd[1387]: cali7538a80b9de: Link UP Apr 28 00:13:45.942440 systemd-networkd[1387]: cali7538a80b9de: Gained carrier Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.679 [INFO][3766] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.680 [INFO][3766] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" iface="eth0" netns="/var/run/netns/cni-39c40441-5050-b124-9782-4f16b9b82b29" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.680 [INFO][3766] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" iface="eth0" netns="/var/run/netns/cni-39c40441-5050-b124-9782-4f16b9b82b29" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.680 [INFO][3766] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" iface="eth0" netns="/var/run/netns/cni-39c40441-5050-b124-9782-4f16b9b82b29" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.681 [INFO][3766] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.681 [INFO][3766] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.785 [INFO][3813] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.786 [INFO][3813] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.891 [INFO][3813] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.936 [WARNING][3813] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.936 [INFO][3813] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.949 [INFO][3813] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:45.968060 containerd[1486]: 2026-04-28 00:13:45.954 [INFO][3766] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:13:45.972149 containerd[1486]: time="2026-04-28T00:13:45.971613169Z" level=info msg="TearDown network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\" successfully" Apr 28 00:13:45.972149 containerd[1486]: time="2026-04-28T00:13:45.971743138Z" level=info msg="StopPodSandbox for \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\" returns successfully" Apr 28 00:13:45.975029 systemd[1]: run-netns-cni\x2d39c40441\x2d5050\x2db124\x2d9782\x2d4f16b9b82b29.mount: Deactivated successfully. 
Apr 28 00:13:45.995252 containerd[1486]: time="2026-04-28T00:13:45.995160184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-j7hjq,Uid:cde2ebe5-98ed-4186-a2bd-c3bd8f220759,Namespace:calico-system,Attempt:1,}" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.699 [INFO][3749] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.701 [INFO][3749] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" iface="eth0" netns="/var/run/netns/cni-a4215304-5ec2-67ee-b607-258db60ba616" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.702 [INFO][3749] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" iface="eth0" netns="/var/run/netns/cni-a4215304-5ec2-67ee-b607-258db60ba616" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.703 [INFO][3749] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" iface="eth0" netns="/var/run/netns/cni-a4215304-5ec2-67ee-b607-258db60ba616" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.703 [INFO][3749] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.703 [INFO][3749] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.818 [INFO][3820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.818 [INFO][3820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.950 [INFO][3820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.983 [WARNING][3820] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.983 [INFO][3820] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.986 [INFO][3820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:46.003220 containerd[1486]: 2026-04-28 00:13:45.995 [INFO][3749] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:13:46.006992 systemd[1]: run-netns-cni\x2da4215304\x2d5ec2\x2d67ee\x2db607\x2d258db60ba616.mount: Deactivated successfully. 
Apr 28 00:13:46.012071 containerd[1486]: time="2026-04-28T00:13:46.008234088Z" level=info msg="TearDown network for sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\" successfully" Apr 28 00:13:46.012071 containerd[1486]: time="2026-04-28T00:13:46.008336615Z" level=info msg="StopPodSandbox for \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\" returns successfully" Apr 28 00:13:46.015358 containerd[1486]: time="2026-04-28T00:13:46.014805455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db98b7b86-8stzl,Uid:bb42889c-dc68-4341-9c32-e020808fbd20,Namespace:calico-system,Attempt:1,}" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.231 [ERROR][3675] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.264 [INFO][3675] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0 csi-node-driver- calico-system 5fd7a659-0a04-4180-b5ea-79c09071b35c 710 0 2026-04-28 00:13:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6986d7597d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 csi-node-driver-687qr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7538a80b9de [] [] }} ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 
00:13:45.264 [INFO][3675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.421 [INFO][3703] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" HandleID="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Workload="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.485 [INFO][3703] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" HandleID="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Workload="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003dcd60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"csi-node-driver-687qr", "timestamp":"2026-04-28 00:13:45.421473616 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000fe000)} Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.486 [INFO][3703] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.486 [INFO][3703] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.486 [INFO][3703] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95' Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.494 [INFO][3703] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.528 [INFO][3703] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.569 [INFO][3703] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.582 [INFO][3703] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.652 [INFO][3703] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.97.64/26 Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.652 [INFO][3703] ipam/ipam.go 588: Found unclaimed block in 70.097128ms host="ci-4081-3-7-n-651e172f95" subnet=192.168.97.64/26 Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.652 [INFO][3703] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4081-3-7-n-651e172f95" subnet=192.168.97.64/26 Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.678 [INFO][3703] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4081-3-7-n-651e172f95" subnet=192.168.97.64/26 Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.678 [INFO][3703] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.709 
[INFO][3703] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.745 [INFO][3703] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.775 [INFO][3703] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.775 [INFO][3703] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4081-3-7-n-651e172f95" subnet=192.168.97.64/26 Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.822 [INFO][3703] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.823 [INFO][3703] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4081-3-7-n-651e172f95" subnet=192.168.97.64/26 Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.845 [INFO][3703] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4081-3-7-n-651e172f95" subnet=192.168.97.64/26 Apr 28 00:13:46.035251 containerd[1486]: 2026-04-28 00:13:45.845 [INFO][3703] ipam/ipam.go 623: Block '192.168.97.64/26' has 64 free ips which is more than 1 ips required. 
host="ci-4081-3-7-n-651e172f95" subnet=192.168.97.64/26 Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.845 [INFO][3703] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.854 [INFO][3703] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415 Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.865 [INFO][3703] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.891 [INFO][3703] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.64/26] block=192.168.97.64/26 handle="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.891 [INFO][3703] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.64/26] handle="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.891 [INFO][3703] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.891 [INFO][3703] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.64/26] IPv6=[] ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" HandleID="k8s-pod-network.c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Workload="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.908 [INFO][3675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fd7a659-0a04-4180-b5ea-79c09071b35c", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"csi-node-driver-687qr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.64/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7538a80b9de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.909 [INFO][3675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.64/32] ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.909 [INFO][3675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7538a80b9de ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" Apr 28 00:13:46.038146 containerd[1486]: 2026-04-28 00:13:45.944 [INFO][3675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" Apr 28 00:13:46.038517 containerd[1486]: 2026-04-28 00:13:45.947 [INFO][3675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fd7a659-0a04-4180-b5ea-79c09071b35c", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415", Pod:"csi-node-driver-687qr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7538a80b9de", MAC:"2e:94:9c:b8:13:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:46.038517 containerd[1486]: 2026-04-28 00:13:46.017 [INFO][3675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415" Namespace="calico-system" Pod="csi-node-driver-687qr" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-csi--node--driver--687qr-eth0" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.695 [INFO][3743] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 
00:13:45.695 [INFO][3743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" iface="eth0" netns="/var/run/netns/cni-c37f87fa-7ade-171d-7146-63d6cd2e94b7" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.695 [INFO][3743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" iface="eth0" netns="/var/run/netns/cni-c37f87fa-7ade-171d-7146-63d6cd2e94b7" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.696 [INFO][3743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" iface="eth0" netns="/var/run/netns/cni-c37f87fa-7ade-171d-7146-63d6cd2e94b7" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.696 [INFO][3743] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.696 [INFO][3743] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.841 [INFO][3818] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.841 [INFO][3818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:45.987 [INFO][3818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:46.022 [WARNING][3818] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:46.022 [INFO][3818] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:46.028 [INFO][3818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:46.043843 containerd[1486]: 2026-04-28 00:13:46.032 [INFO][3743] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:13:46.045590 containerd[1486]: time="2026-04-28T00:13:46.044782651Z" level=info msg="TearDown network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\" successfully" Apr 28 00:13:46.045590 containerd[1486]: time="2026-04-28T00:13:46.044835655Z" level=info msg="StopPodSandbox for \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\" returns successfully" Apr 28 00:13:46.049400 containerd[1486]: time="2026-04-28T00:13:46.048901651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ghw74,Uid:26e97b01-fe08-43da-b0e6-c4023295e62a,Namespace:kube-system,Attempt:1,}" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.778 [INFO][3724] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.789 [INFO][3724] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" iface="eth0" netns="/var/run/netns/cni-67dcf3c2-b122-ecc7-279c-8685141d7931" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.790 [INFO][3724] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" iface="eth0" netns="/var/run/netns/cni-67dcf3c2-b122-ecc7-279c-8685141d7931" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.790 [INFO][3724] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" iface="eth0" netns="/var/run/netns/cni-67dcf3c2-b122-ecc7-279c-8685141d7931" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.790 [INFO][3724] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.790 [INFO][3724] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.900 [INFO][3835] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:45.902 [INFO][3835] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:46.028 [INFO][3835] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:46.095 [WARNING][3835] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:46.095 [INFO][3835] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:46.101 [INFO][3835] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:46.129137 containerd[1486]: 2026-04-28 00:13:46.117 [INFO][3724] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:13:46.171271 containerd[1486]: time="2026-04-28T00:13:46.168587383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:13:46.171271 containerd[1486]: time="2026-04-28T00:13:46.168691230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:13:46.171271 containerd[1486]: time="2026-04-28T00:13:46.168722232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:46.171271 containerd[1486]: time="2026-04-28T00:13:46.168817239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.743 [INFO][3773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.744 [INFO][3773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" iface="eth0" netns="/var/run/netns/cni-c7e23bf6-a139-8f6a-cf84-688e55f402e7" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.745 [INFO][3773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" iface="eth0" netns="/var/run/netns/cni-c7e23bf6-a139-8f6a-cf84-688e55f402e7" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.748 [INFO][3773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" iface="eth0" netns="/var/run/netns/cni-c7e23bf6-a139-8f6a-cf84-688e55f402e7" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.748 [INFO][3773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.748 [INFO][3773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.925 [INFO][3828] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:45.928 [INFO][3828] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:46.101 [INFO][3828] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:46.151 [WARNING][3828] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:46.151 [INFO][3828] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:46.159 [INFO][3828] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:46.181714 containerd[1486]: 2026-04-28 00:13:46.166 [INFO][3773] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:13:46.193309 containerd[1486]: time="2026-04-28T00:13:46.192179146Z" level=info msg="TearDown network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\" successfully" Apr 28 00:13:46.193309 containerd[1486]: time="2026-04-28T00:13:46.192232110Z" level=info msg="StopPodSandbox for \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\" returns successfully" Apr 28 00:13:46.193309 containerd[1486]: time="2026-04-28T00:13:46.192175226Z" level=info msg="TearDown network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\" successfully" Apr 28 00:13:46.193309 containerd[1486]: time="2026-04-28T00:13:46.192334357Z" level=info msg="StopPodSandbox for \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\" returns successfully" Apr 28 00:13:46.205945 containerd[1486]: time="2026-04-28T00:13:46.205301758Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-6h4jt,Uid:c14518d0-875d-4e4a-bf96-4a7030b764e9,Namespace:calico-system,Attempt:1,}" Apr 28 00:13:46.208415 containerd[1486]: time="2026-04-28T00:13:46.207836050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7rl8q,Uid:b726e622-d473-4766-8608-042b869c5024,Namespace:kube-system,Attempt:1,}" Apr 28 00:13:46.283018 systemd[1]: Started cri-containerd-c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415.scope - libcontainer container c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415. Apr 28 00:13:46.402867 containerd[1486]: time="2026-04-28T00:13:46.401826710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-kszc5,Uid:09ed7117-8974-4541-abe8-28a9ca0d9670,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:46.463513 systemd[1]: run-netns-cni\x2dc7e23bf6\x2da139\x2d8f6a\x2dcf84\x2d688e55f402e7.mount: Deactivated successfully. Apr 28 00:13:46.463624 systemd[1]: run-netns-cni\x2d67dcf3c2\x2db122\x2decc7\x2d279c\x2d8685141d7931.mount: Deactivated successfully. Apr 28 00:13:46.463742 systemd[1]: run-netns-cni\x2dc37f87fa\x2d7ade\x2d171d\x2d7146\x2d63d6cd2e94b7.mount: Deactivated successfully. 
Apr 28 00:13:46.555992 kubelet[2562]: I0428 00:13:46.555160 2562 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-nginx-config\" (UniqueName: \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-nginx-config\") pod \"0057e9c9-3060-413e-8c72-3f0c08b487fd\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " Apr 28 00:13:46.555992 kubelet[2562]: I0428 00:13:46.555225 2562 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/0057e9c9-3060-413e-8c72-3f0c08b487fd-kube-api-access-9ns4x\" (UniqueName: \"kubernetes.io/projected/0057e9c9-3060-413e-8c72-3f0c08b487fd-kube-api-access-9ns4x\") pod \"0057e9c9-3060-413e-8c72-3f0c08b487fd\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " Apr 28 00:13:46.555992 kubelet[2562]: I0428 00:13:46.555256 2562 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-backend-key-pair\") pod \"0057e9c9-3060-413e-8c72-3f0c08b487fd\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " Apr 28 00:13:46.559121 kubelet[2562]: I0428 00:13:46.557374 2562 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-ca-bundle\") pod \"0057e9c9-3060-413e-8c72-3f0c08b487fd\" (UID: \"0057e9c9-3060-413e-8c72-3f0c08b487fd\") " Apr 28 00:13:46.559121 kubelet[2562]: I0428 00:13:46.557977 2562 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-ca-bundle" pod "0057e9c9-3060-413e-8c72-3f0c08b487fd" (UID: "0057e9c9-3060-413e-8c72-3f0c08b487fd"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 00:13:46.559121 kubelet[2562]: I0428 00:13:46.558112 2562 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-nginx-config" pod "0057e9c9-3060-413e-8c72-3f0c08b487fd" (UID: "0057e9c9-3060-413e-8c72-3f0c08b487fd"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 28 00:13:46.566077 systemd[1]: var-lib-kubelet-pods-0057e9c9\x2d3060\x2d413e\x2d8c72\x2d3f0c08b487fd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 28 00:13:46.572376 kubelet[2562]: I0428 00:13:46.572197 2562 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-backend-key-pair" pod "0057e9c9-3060-413e-8c72-3f0c08b487fd" (UID: "0057e9c9-3060-413e-8c72-3f0c08b487fd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 28 00:13:46.576384 containerd[1486]: time="2026-04-28T00:13:46.573226196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-687qr,Uid:5fd7a659-0a04-4180-b5ea-79c09071b35c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415\"" Apr 28 00:13:46.574334 systemd[1]: var-lib-kubelet-pods-0057e9c9\x2d3060\x2d413e\x2d8c72\x2d3f0c08b487fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ns4x.mount: Deactivated successfully. Apr 28 00:13:46.581971 kubelet[2562]: I0428 00:13:46.580077 2562 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0057e9c9-3060-413e-8c72-3f0c08b487fd-kube-api-access-9ns4x" pod "0057e9c9-3060-413e-8c72-3f0c08b487fd" (UID: "0057e9c9-3060-413e-8c72-3f0c08b487fd"). InnerVolumeSpecName "kube-api-access-9ns4x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 28 00:13:46.590857 containerd[1486]: time="2026-04-28T00:13:46.590620498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 28 00:13:46.659828 kubelet[2562]: I0428 00:13:46.658901 2562 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-nginx-config\") on node \"ci-4081-3-7-n-651e172f95\" DevicePath \"\"" Apr 28 00:13:46.659828 kubelet[2562]: I0428 00:13:46.658955 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ns4x\" (UniqueName: \"kubernetes.io/projected/0057e9c9-3060-413e-8c72-3f0c08b487fd-kube-api-access-9ns4x\") on node \"ci-4081-3-7-n-651e172f95\" DevicePath \"\"" Apr 28 00:13:46.659828 kubelet[2562]: I0428 00:13:46.658968 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-backend-key-pair\") on node \"ci-4081-3-7-n-651e172f95\" DevicePath \"\"" Apr 28 00:13:46.659828 kubelet[2562]: I0428 00:13:46.658978 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0057e9c9-3060-413e-8c72-3f0c08b487fd-whisker-ca-bundle\") on node \"ci-4081-3-7-n-651e172f95\" DevicePath \"\"" Apr 28 00:13:46.766023 systemd-networkd[1387]: calibb49a003efe: Link UP Apr 28 00:13:46.766305 systemd-networkd[1387]: calibb49a003efe: Gained carrier Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.298 [ERROR][3853] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.346 [INFO][3853] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0 calico-apiserver-748c896d4f- calico-system cde2ebe5-98ed-4186-a2bd-c3bd8f220759 892 0 2026-04-28 00:13:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748c896d4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 calico-apiserver-748c896d4f-j7hjq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibb49a003efe [] [] }} ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.346 [INFO][3853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.542 [INFO][3959] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" HandleID="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.628 [INFO][3959] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" HandleID="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" 
Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000384110), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"calico-apiserver-748c896d4f-j7hjq", "timestamp":"2026-04-28 00:13:46.542397821 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400053db80)} Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.629 [INFO][3959] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.631 [INFO][3959] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.631 [INFO][3959] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95' Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.641 [INFO][3959] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.665 [INFO][3959] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.689 [INFO][3959] ipam/ipam.go 526: Trying affinity for 192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.697 [INFO][3959] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.706 [INFO][3959] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.706 [INFO][3959] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.709 [INFO][3959] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.720 [INFO][3959] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.731 [INFO][3959] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.66/26] block=192.168.97.64/26 handle="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.731 [INFO][3959] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.66/26] handle="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.731 [INFO][3959] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 28 00:13:46.816307 containerd[1486]: 2026-04-28 00:13:46.732 [INFO][3959] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.66/26] IPv6=[] ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" HandleID="k8s-pod-network.96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0"
Apr 28 00:13:46.817103 containerd[1486]: 2026-04-28 00:13:46.738 [INFO][3853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0", GenerateName:"calico-apiserver-748c896d4f-", Namespace:"calico-system", SelfLink:"", UID:"cde2ebe5-98ed-4186-a2bd-c3bd8f220759", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c896d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"calico-apiserver-748c896d4f-j7hjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibb49a003efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 00:13:46.817103 containerd[1486]: 2026-04-28 00:13:46.738 [INFO][3853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.66/32] ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0"
Apr 28 00:13:46.817103 containerd[1486]: 2026-04-28 00:13:46.741 [INFO][3853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb49a003efe ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0"
Apr 28 00:13:46.817103 containerd[1486]: 2026-04-28 00:13:46.763 [INFO][3853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0"
Apr 28 00:13:46.817103 containerd[1486]: 2026-04-28 00:13:46.768 [INFO][3853] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0", GenerateName:"calico-apiserver-748c896d4f-", Namespace:"calico-system", SelfLink:"", UID:"cde2ebe5-98ed-4186-a2bd-c3bd8f220759", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c896d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd", Pod:"calico-apiserver-748c896d4f-j7hjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibb49a003efe", MAC:"82:15:e8:e5:bc:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 00:13:46.817103 containerd[1486]: 2026-04-28 00:13:46.813 [INFO][3853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-j7hjq" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0"
Apr 28 00:13:46.861650 containerd[1486]: time="2026-04-28T00:13:46.861175000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:13:46.861650 containerd[1486]: time="2026-04-28T00:13:46.861420097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:13:46.861650 containerd[1486]: time="2026-04-28T00:13:46.861435938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:46.862397 containerd[1486]: time="2026-04-28T00:13:46.861925531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:46.879812 systemd-networkd[1387]: cali2557e6eeab5: Link UP
Apr 28 00:13:46.882015 systemd-networkd[1387]: cali2557e6eeab5: Gained carrier
Apr 28 00:13:46.926002 systemd[1]: Started cri-containerd-96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd.scope - libcontainer container 96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd.
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.237 [ERROR][3895] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.290 [INFO][3895] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0 coredns-7d764666f9- kube-system 26e97b01-fe08-43da-b0e6-c4023295e62a 894 0 2026-04-28 00:13:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 coredns-7d764666f9-ghw74 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2557e6eeab5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.290 [INFO][3895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.547 [INFO][3929] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" HandleID="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.641 [INFO][3929] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" HandleID="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037c570), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"coredns-7d764666f9-ghw74", "timestamp":"2026-04-28 00:13:46.54708238 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000f4dc0)}
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.641 [INFO][3929] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.731 [INFO][3929] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.733 [INFO][3929] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95'
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.742 [INFO][3929] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.772 [INFO][3929] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.810 [INFO][3929] ipam/ipam.go 526: Trying affinity for 192.168.97.64/26 host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.827 [INFO][3929] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.829 [INFO][3929] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.830 [INFO][3929] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.833 [INFO][3929] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.843 [INFO][3929] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.857 [INFO][3929] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.67/26] block=192.168.97.64/26 handle="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.858 [INFO][3929] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.67/26] handle="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.859 [INFO][3929] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 28 00:13:46.945737 containerd[1486]: 2026-04-28 00:13:46.859 [INFO][3929] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.67/26] IPv6=[] ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" HandleID="k8s-pod-network.68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0"
Apr 28 00:13:46.946396 containerd[1486]: 2026-04-28 00:13:46.868 [INFO][3895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"26e97b01-fe08-43da-b0e6-c4023295e62a", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"coredns-7d764666f9-ghw74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2557e6eeab5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 00:13:46.946396 containerd[1486]: 2026-04-28 00:13:46.868 [INFO][3895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.67/32] ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0"
Apr 28 00:13:46.946396 containerd[1486]: 2026-04-28 00:13:46.868 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2557e6eeab5 ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0"
Apr 28 00:13:46.946396 containerd[1486]: 2026-04-28 00:13:46.901 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0"
Apr 28 00:13:46.946396 containerd[1486]: 2026-04-28 00:13:46.902 [INFO][3895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"26e97b01-fe08-43da-b0e6-c4023295e62a", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53", Pod:"coredns-7d764666f9-ghw74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2557e6eeab5", MAC:"de:b0:e7:33:9d:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 00:13:46.946597 containerd[1486]: 2026-04-28 00:13:46.931 [INFO][3895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53" Namespace="kube-system" Pod="coredns-7d764666f9-ghw74" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0"
Apr 28 00:13:46.991971 systemd-networkd[1387]: cali1eaf93cae85: Link UP
Apr 28 00:13:46.992275 systemd-networkd[1387]: cali1eaf93cae85: Gained carrier
Apr 28 00:13:47.037859 containerd[1486]: time="2026-04-28T00:13:47.037183361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:13:47.037859 containerd[1486]: time="2026-04-28T00:13:47.037267686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:13:47.037859 containerd[1486]: time="2026-04-28T00:13:47.037285647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:47.039351 containerd[1486]: time="2026-04-28T00:13:47.037838884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.247 [ERROR][3868] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.288 [INFO][3868] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0 calico-kube-controllers-7db98b7b86- calico-system bb42889c-dc68-4341-9c32-e020808fbd20 893 0 2026-04-28 00:13:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7db98b7b86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 calico-kube-controllers-7db98b7b86-8stzl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1eaf93cae85 [] [] }} ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.288 [INFO][3868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.599 [INFO][3928] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" HandleID="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.650 [INFO][3928] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" HandleID="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"calico-kube-controllers-7db98b7b86-8stzl", "timestamp":"2026-04-28 00:13:46.599586707 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000e8580)}
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.651 [INFO][3928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.859 [INFO][3928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.864 [INFO][3928] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95'
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.873 [INFO][3928] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.903 [INFO][3928] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.917 [INFO][3928] ipam/ipam.go 526: Trying affinity for 192.168.97.64/26 host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.925 [INFO][3928] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.935 [INFO][3928] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.935 [INFO][3928] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.939 [INFO][3928] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.955 [INFO][3928] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.971 [INFO][3928] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.68/26] block=192.168.97.64/26 handle="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.971 [INFO][3928] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.68/26] handle="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" host="ci-4081-3-7-n-651e172f95"
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.971 [INFO][3928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 28 00:13:47.046459 containerd[1486]: 2026-04-28 00:13:46.971 [INFO][3928] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.68/26] IPv6=[] ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" HandleID="k8s-pod-network.5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0"
Apr 28 00:13:47.047163 containerd[1486]: 2026-04-28 00:13:46.977 [INFO][3868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0", GenerateName:"calico-kube-controllers-7db98b7b86-", Namespace:"calico-system", SelfLink:"", UID:"bb42889c-dc68-4341-9c32-e020808fbd20", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db98b7b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"calico-kube-controllers-7db98b7b86-8stzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eaf93cae85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 00:13:47.047163 containerd[1486]: 2026-04-28 00:13:46.978 [INFO][3868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.68/32] ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0"
Apr 28 00:13:47.047163 containerd[1486]: 2026-04-28 00:13:46.979 [INFO][3868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eaf93cae85 ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0"
Apr 28 00:13:47.047163 containerd[1486]: 2026-04-28 00:13:46.993 [INFO][3868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0"
Apr 28 00:13:47.047163 containerd[1486]: 2026-04-28 00:13:46.995 [INFO][3868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0", GenerateName:"calico-kube-controllers-7db98b7b86-", Namespace:"calico-system", SelfLink:"", UID:"bb42889c-dc68-4341-9c32-e020808fbd20", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db98b7b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42", Pod:"calico-kube-controllers-7db98b7b86-8stzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eaf93cae85", MAC:"1a:fd:a2:06:d4:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 28 00:13:47.047163 containerd[1486]: 2026-04-28 00:13:47.031 [INFO][3868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42" Namespace="calico-system" Pod="calico-kube-controllers-7db98b7b86-8stzl" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0"
Apr 28 00:13:47.096027 systemd[1]: Started cri-containerd-68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53.scope - libcontainer container 68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53.
Apr 28 00:13:47.099321 containerd[1486]: time="2026-04-28T00:13:47.099020106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-j7hjq,Uid:cde2ebe5-98ed-4186-a2bd-c3bd8f220759,Namespace:calico-system,Attempt:1,} returns sandbox id \"96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd\""
Apr 28 00:13:47.117777 systemd-networkd[1387]: cali619a13a1a73: Link UP
Apr 28 00:13:47.119339 systemd-networkd[1387]: cali619a13a1a73: Gained carrier
Apr 28 00:13:47.141240 containerd[1486]: time="2026-04-28T00:13:47.140424667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 28 00:13:47.141240 containerd[1486]: time="2026-04-28T00:13:47.140491632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 28 00:13:47.141240 containerd[1486]: time="2026-04-28T00:13:47.140508313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:47.141240 containerd[1486]: time="2026-04-28T00:13:47.140628161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 28 00:13:47.162529 systemd[1]: Removed slice kubepods-besteffort-pod0057e9c9_3060_413e_8c72_3f0c08b487fd.slice - libcontainer container kubepods-besteffort-pod0057e9c9_3060_413e_8c72_3f0c08b487fd.slice.
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.477 [ERROR][3937] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.526 [INFO][3937] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0 coredns-7d764666f9- kube-system b726e622-d473-4766-8608-042b869c5024 897 0 2026-04-28 00:13:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 coredns-7d764666f9-7rl8q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali619a13a1a73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-"
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.526 [INFO][3937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0"
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.683 [INFO][4004] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" HandleID="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0"
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.712 [INFO][4004] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" HandleID="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003734e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"coredns-7d764666f9-7rl8q", "timestamp":"2026-04-28 00:13:46.683957119 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000188000)}
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.712 [INFO][4004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.972 [INFO][4004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.972 [INFO][4004] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95' Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:46.979 [INFO][4004] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.005 [INFO][4004] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.028 [INFO][4004] ipam/ipam.go 526: Trying affinity for 192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.033 [INFO][4004] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.043 [INFO][4004] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.043 [INFO][4004] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.053 [INFO][4004] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.071 [INFO][4004] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.094 [INFO][4004] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.97.69/26] block=192.168.97.64/26 handle="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.094 [INFO][4004] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.69/26] handle="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.094 [INFO][4004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:47.185776 containerd[1486]: 2026-04-28 00:13:47.094 [INFO][4004] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.69/26] IPv6=[] ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" HandleID="k8s-pod-network.0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:47.186418 containerd[1486]: 2026-04-28 00:13:47.105 [INFO][3937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b726e622-d473-4766-8608-042b869c5024", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"coredns-7d764666f9-7rl8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali619a13a1a73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:47.186418 containerd[1486]: 2026-04-28 00:13:47.108 [INFO][3937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.69/32] ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:47.186418 containerd[1486]: 2026-04-28 00:13:47.108 [INFO][3937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali619a13a1a73 
ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:47.186418 containerd[1486]: 2026-04-28 00:13:47.120 [INFO][3937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:47.186418 containerd[1486]: 2026-04-28 00:13:47.120 [INFO][3937] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b726e622-d473-4766-8608-042b869c5024", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", 
ContainerID:"0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa", Pod:"coredns-7d764666f9-7rl8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali619a13a1a73", MAC:"ba:fe:04:33:a1:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:47.186585 containerd[1486]: 2026-04-28 00:13:47.181 [INFO][3937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa" Namespace="kube-system" Pod="coredns-7d764666f9-7rl8q" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:13:47.241621 containerd[1486]: time="2026-04-28T00:13:47.239795240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ghw74,Uid:26e97b01-fe08-43da-b0e6-c4023295e62a,Namespace:kube-system,Attempt:1,} returns sandbox id \"68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53\"" Apr 28 00:13:47.240967 systemd[1]: Started 
cri-containerd-5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42.scope - libcontainer container 5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42. Apr 28 00:13:47.254872 containerd[1486]: time="2026-04-28T00:13:47.254189946Z" level=info msg="CreateContainer within sandbox \"68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:13:47.263761 containerd[1486]: time="2026-04-28T00:13:47.261153244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:13:47.263761 containerd[1486]: time="2026-04-28T00:13:47.261293533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:13:47.263761 containerd[1486]: time="2026-04-28T00:13:47.261325735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:47.263761 containerd[1486]: time="2026-04-28T00:13:47.261519308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:47.293143 systemd[1]: Started cri-containerd-0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa.scope - libcontainer container 0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa. 
Apr 28 00:13:47.302902 systemd-networkd[1387]: cali6dd00e6370e: Link UP Apr 28 00:13:47.304705 systemd-networkd[1387]: cali6dd00e6370e: Gained carrier Apr 28 00:13:47.337303 containerd[1486]: time="2026-04-28T00:13:47.337233005Z" level=info msg="CreateContainer within sandbox \"68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4b220e47f433bfb13df9b7aeeef93f2c39705e85bf9aaaff364e354d53f660e\"" Apr 28 00:13:47.372975 containerd[1486]: time="2026-04-28T00:13:47.372731099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db98b7b86-8stzl,Uid:bb42889c-dc68-4341-9c32-e020808fbd20,Namespace:calico-system,Attempt:1,} returns sandbox id \"5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42\"" Apr 28 00:13:47.381178 containerd[1486]: time="2026-04-28T00:13:47.380946239Z" level=info msg="StartContainer for \"f4b220e47f433bfb13df9b7aeeef93f2c39705e85bf9aaaff364e354d53f660e\"" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:46.650 [ERROR][3967] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:46.687 [INFO][3967] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0 calico-apiserver-748c896d4f- calico-system 09ed7117-8974-4541-abe8-28a9ca0d9670 881 0 2026-04-28 00:13:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748c896d4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 calico-apiserver-748c896d4f-kszc5 eth0 calico-apiserver [] 
[] [kns.calico-system ksa.calico-system.calico-apiserver] cali6dd00e6370e [] [] }} ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:46.687 [INFO][3967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:46.804 [INFO][4033] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" HandleID="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:46.828 [INFO][4033] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" HandleID="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004e120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"calico-apiserver-748c896d4f-kszc5", "timestamp":"2026-04-28 00:13:46.804140485 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x4000470000)} Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:46.828 [INFO][4033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.095 [INFO][4033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.096 [INFO][4033] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95' Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.106 [INFO][4033] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.145 [INFO][4033] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.164 [INFO][4033] ipam/ipam.go 526: Trying affinity for 192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.178 [INFO][4033] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.192 [INFO][4033] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.192 [INFO][4033] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.196 [INFO][4033] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.209 
[INFO][4033] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.226 [INFO][4033] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.70/26] block=192.168.97.64/26 handle="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.226 [INFO][4033] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.70/26] handle="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.226 [INFO][4033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:47.396815 containerd[1486]: 2026-04-28 00:13:47.227 [INFO][4033] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.70/26] IPv6=[] ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" HandleID="k8s-pod-network.867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:47.398129 containerd[1486]: 2026-04-28 00:13:47.251 [INFO][3967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0", GenerateName:"calico-apiserver-748c896d4f-", Namespace:"calico-system", SelfLink:"", 
UID:"09ed7117-8974-4541-abe8-28a9ca0d9670", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c896d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"calico-apiserver-748c896d4f-kszc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6dd00e6370e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:47.398129 containerd[1486]: 2026-04-28 00:13:47.251 [INFO][3967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.70/32] ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:47.398129 containerd[1486]: 2026-04-28 00:13:47.251 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dd00e6370e ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 
00:13:47.398129 containerd[1486]: 2026-04-28 00:13:47.308 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:47.398129 containerd[1486]: 2026-04-28 00:13:47.315 [INFO][3967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0", GenerateName:"calico-apiserver-748c896d4f-", Namespace:"calico-system", SelfLink:"", UID:"09ed7117-8974-4541-abe8-28a9ca0d9670", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c896d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad", Pod:"calico-apiserver-748c896d4f-kszc5", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6dd00e6370e", MAC:"fe:81:9f:f0:9f:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:47.398129 containerd[1486]: 2026-04-28 00:13:47.375 [INFO][3967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad" Namespace="calico-system" Pod="calico-apiserver-748c896d4f-kszc5" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--kszc5-eth0" Apr 28 00:13:47.434879 systemd-networkd[1387]: cali04c194e7082: Link UP Apr 28 00:13:47.435442 systemd-networkd[1387]: cali04c194e7082: Gained carrier Apr 28 00:13:47.500258 systemd[1]: Started cri-containerd-f4b220e47f433bfb13df9b7aeeef93f2c39705e85bf9aaaff364e354d53f660e.scope - libcontainer container f4b220e47f433bfb13df9b7aeeef93f2c39705e85bf9aaaff364e354d53f660e. 
Apr 28 00:13:47.519598 containerd[1486]: time="2026-04-28T00:13:47.519139523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7rl8q,Uid:b726e622-d473-4766-8608-042b869c5024,Namespace:kube-system,Attempt:1,} returns sandbox id \"0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa\"" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:46.553 [ERROR][3938] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:46.657 [INFO][3938] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0 goldmane-7fb6cdc5d9- calico-system c14518d0-875d-4e4a-bf96-4a7030b764e9 896 0 2026-04-28 00:13:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7fb6cdc5d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 goldmane-7fb6cdc5d9-6h4jt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali04c194e7082 [] [] }} ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:46.657 [INFO][3938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:46.796 [INFO][4028] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" HandleID="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:46.831 [INFO][4028] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" HandleID="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004fbf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"goldmane-7fb6cdc5d9-6h4jt", "timestamp":"2026-04-28 00:13:46.796399519 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40005de580)} Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:46.831 [INFO][4028] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.227 [INFO][4028] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.228 [INFO][4028] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95' Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.239 [INFO][4028] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.260 [INFO][4028] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.281 [INFO][4028] ipam/ipam.go 526: Trying affinity for 192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.299 [INFO][4028] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.313 [INFO][4028] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.313 [INFO][4028] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.323 [INFO][4028] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8 Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.334 [INFO][4028] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.383 [INFO][4028] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.97.71/26] block=192.168.97.64/26 handle="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.383 [INFO][4028] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.71/26] handle="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.383 [INFO][4028] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:47.527180 containerd[1486]: 2026-04-28 00:13:47.383 [INFO][4028] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.71/26] IPv6=[] ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" HandleID="k8s-pod-network.7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:47.527823 containerd[1486]: 2026-04-28 00:13:47.415 [INFO][3938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"c14518d0-875d-4e4a-bf96-4a7030b764e9", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"goldmane-7fb6cdc5d9-6h4jt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04c194e7082", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:47.527823 containerd[1486]: 2026-04-28 00:13:47.416 [INFO][3938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.71/32] ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:47.527823 containerd[1486]: 2026-04-28 00:13:47.416 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04c194e7082 ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:47.527823 containerd[1486]: 2026-04-28 00:13:47.443 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:47.527823 containerd[1486]: 2026-04-28 00:13:47.457 [INFO][3938] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"c14518d0-875d-4e4a-bf96-4a7030b764e9", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8", Pod:"goldmane-7fb6cdc5d9-6h4jt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04c194e7082", MAC:"46:8f:30:ed:24:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:47.527823 containerd[1486]: 2026-04-28 00:13:47.499 [INFO][3938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-6h4jt" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:13:47.539923 containerd[1486]: time="2026-04-28T00:13:47.537558654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:13:47.539923 containerd[1486]: time="2026-04-28T00:13:47.537627578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:13:47.539923 containerd[1486]: time="2026-04-28T00:13:47.537643779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:47.539923 containerd[1486]: time="2026-04-28T00:13:47.537831672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:47.566132 containerd[1486]: time="2026-04-28T00:13:47.565364642Z" level=info msg="CreateContainer within sandbox \"0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:13:47.621756 containerd[1486]: time="2026-04-28T00:13:47.619531963Z" level=info msg="CreateContainer within sandbox \"0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"897722bdaf91af641fb9bee34f93853d86e01a9644334e5bcb1d2361de43cf89\"" Apr 28 00:13:47.622903 containerd[1486]: time="2026-04-28T00:13:47.622682650Z" level=info msg="StartContainer for \"897722bdaf91af641fb9bee34f93853d86e01a9644334e5bcb1d2361de43cf89\"" Apr 28 00:13:47.657475 containerd[1486]: time="2026-04-28T00:13:47.655799947Z" level=info msg="StartContainer for 
\"f4b220e47f433bfb13df9b7aeeef93f2c39705e85bf9aaaff364e354d53f660e\" returns successfully" Apr 28 00:13:47.682977 systemd[1]: Started cri-containerd-867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad.scope - libcontainer container 867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad. Apr 28 00:13:47.685838 systemd[1]: Created slice kubepods-besteffort-podc7e16260_72bf_4806_886f_6b2142b0525b.slice - libcontainer container kubepods-besteffort-podc7e16260_72bf_4806_886f_6b2142b0525b.slice. Apr 28 00:13:47.702588 systemd-networkd[1387]: cali7538a80b9de: Gained IPv6LL Apr 28 00:13:47.711692 containerd[1486]: time="2026-04-28T00:13:47.709771855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:13:47.711692 containerd[1486]: time="2026-04-28T00:13:47.709862581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:13:47.711692 containerd[1486]: time="2026-04-28T00:13:47.709887062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:47.711692 containerd[1486]: time="2026-04-28T00:13:47.709998910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:47.712562 systemd[1]: Started cri-containerd-897722bdaf91af641fb9bee34f93853d86e01a9644334e5bcb1d2361de43cf89.scope - libcontainer container 897722bdaf91af641fb9bee34f93853d86e01a9644334e5bcb1d2361de43cf89. Apr 28 00:13:47.754495 systemd[1]: Started cri-containerd-7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8.scope - libcontainer container 7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8. 
Apr 28 00:13:47.773147 kubelet[2562]: I0428 00:13:47.773037 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlh4r\" (UniqueName: \"kubernetes.io/projected/c7e16260-72bf-4806-886f-6b2142b0525b-kube-api-access-jlh4r\") pod \"whisker-668955cff-wfhj9\" (UID: \"c7e16260-72bf-4806-886f-6b2142b0525b\") " pod="calico-system/whisker-668955cff-wfhj9" Apr 28 00:13:47.773147 kubelet[2562]: I0428 00:13:47.773098 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7e16260-72bf-4806-886f-6b2142b0525b-whisker-backend-key-pair\") pod \"whisker-668955cff-wfhj9\" (UID: \"c7e16260-72bf-4806-886f-6b2142b0525b\") " pod="calico-system/whisker-668955cff-wfhj9" Apr 28 00:13:47.773147 kubelet[2562]: I0428 00:13:47.773118 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e16260-72bf-4806-886f-6b2142b0525b-whisker-ca-bundle\") pod \"whisker-668955cff-wfhj9\" (UID: \"c7e16260-72bf-4806-886f-6b2142b0525b\") " pod="calico-system/whisker-668955cff-wfhj9" Apr 28 00:13:47.773147 kubelet[2562]: I0428 00:13:47.773144 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c7e16260-72bf-4806-886f-6b2142b0525b-nginx-config\") pod \"whisker-668955cff-wfhj9\" (UID: \"c7e16260-72bf-4806-886f-6b2142b0525b\") " pod="calico-system/whisker-668955cff-wfhj9" Apr 28 00:13:47.785805 containerd[1486]: time="2026-04-28T00:13:47.785681485Z" level=info msg="StartContainer for \"897722bdaf91af641fb9bee34f93853d86e01a9644334e5bcb1d2361de43cf89\" returns successfully" Apr 28 00:13:47.969312 containerd[1486]: time="2026-04-28T00:13:47.969258473Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-6h4jt,Uid:c14518d0-875d-4e4a-bf96-4a7030b764e9,Namespace:calico-system,Attempt:1,} returns sandbox id \"7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8\"" Apr 28 00:13:47.977213 containerd[1486]: time="2026-04-28T00:13:47.977154992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748c896d4f-kszc5,Uid:09ed7117-8974-4541-abe8-28a9ca0d9670,Namespace:calico-system,Attempt:0,} returns sandbox id \"867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad\"" Apr 28 00:13:48.002895 containerd[1486]: time="2026-04-28T00:13:48.002612183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-668955cff-wfhj9,Uid:c7e16260-72bf-4806-886f-6b2142b0525b,Namespace:calico-system,Attempt:0,}" Apr 28 00:13:48.223396 systemd-networkd[1387]: caliadd47a3de4a: Link UP Apr 28 00:13:48.223873 systemd-networkd[1387]: caliadd47a3de4a: Gained carrier Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.055 [ERROR][4464] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.085 [INFO][4464] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0 whisker-668955cff- calico-system c7e16260-72bf-4806-886f-6b2142b0525b 960 0 2026-04-28 00:13:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:668955cff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-n-651e172f95 whisker-668955cff-wfhj9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliadd47a3de4a [] [] }} ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" 
Namespace="calico-system" Pod="whisker-668955cff-wfhj9" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.085 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Namespace="calico-system" Pod="whisker-668955cff-wfhj9" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.122 [INFO][4508] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" HandleID="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Workload="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.149 [INFO][4508] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" HandleID="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Workload="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002efdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-n-651e172f95", "pod":"whisker-668955cff-wfhj9", "timestamp":"2026-04-28 00:13:48.12240409 +0000 UTC"}, Hostname:"ci-4081-3-7-n-651e172f95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004138c0)} Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.149 [INFO][4508] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.149 [INFO][4508] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.149 [INFO][4508] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-n-651e172f95' Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.155 [INFO][4508] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.167 [INFO][4508] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.175 [INFO][4508] ipam/ipam.go 526: Trying affinity for 192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.179 [INFO][4508] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.189 [INFO][4508] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.189 [INFO][4508] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.194 [INFO][4508] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868 Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.204 [INFO][4508] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" 
host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.216 [INFO][4508] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.72/26] block=192.168.97.64/26 handle="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.217 [INFO][4508] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.72/26] handle="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" host="ci-4081-3-7-n-651e172f95" Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.217 [INFO][4508] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:13:48.252164 containerd[1486]: 2026-04-28 00:13:48.217 [INFO][4508] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.72/26] IPv6=[] ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" HandleID="k8s-pod-network.68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Workload="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" Apr 28 00:13:48.253209 containerd[1486]: 2026-04-28 00:13:48.219 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Namespace="calico-system" Pod="whisker-668955cff-wfhj9" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0", GenerateName:"whisker-668955cff-", Namespace:"calico-system", SelfLink:"", UID:"c7e16260-72bf-4806-886f-6b2142b0525b", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"668955cff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"", Pod:"whisker-668955cff-wfhj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliadd47a3de4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:48.253209 containerd[1486]: 2026-04-28 00:13:48.219 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.72/32] ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Namespace="calico-system" Pod="whisker-668955cff-wfhj9" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" Apr 28 00:13:48.253209 containerd[1486]: 2026-04-28 00:13:48.219 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadd47a3de4a ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Namespace="calico-system" Pod="whisker-668955cff-wfhj9" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" Apr 28 00:13:48.253209 containerd[1486]: 2026-04-28 00:13:48.222 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Namespace="calico-system" Pod="whisker-668955cff-wfhj9" 
WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" Apr 28 00:13:48.253209 containerd[1486]: 2026-04-28 00:13:48.223 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Namespace="calico-system" Pod="whisker-668955cff-wfhj9" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0", GenerateName:"whisker-668955cff-", Namespace:"calico-system", SelfLink:"", UID:"c7e16260-72bf-4806-886f-6b2142b0525b", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"668955cff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868", Pod:"whisker-668955cff-wfhj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliadd47a3de4a", MAC:"b2:95:d4:98:8a:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:13:48.253209 
containerd[1486]: 2026-04-28 00:13:48.247 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868" Namespace="calico-system" Pod="whisker-668955cff-wfhj9" WorkloadEndpoint="ci--4081--3--7--n--651e172f95-k8s-whisker--668955cff--wfhj9-eth0" Apr 28 00:13:48.310425 containerd[1486]: time="2026-04-28T00:13:48.308401412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:13:48.310425 containerd[1486]: time="2026-04-28T00:13:48.308504219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:13:48.310425 containerd[1486]: time="2026-04-28T00:13:48.308523340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:48.311729 containerd[1486]: time="2026-04-28T00:13:48.308668149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:13:48.346980 systemd[1]: Started cri-containerd-68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868.scope - libcontainer container 68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868. 
Apr 28 00:13:48.407229 systemd-networkd[1387]: cali2557e6eeab5: Gained IPv6LL Apr 28 00:13:48.423042 containerd[1486]: time="2026-04-28T00:13:48.422984108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-668955cff-wfhj9,Uid:c7e16260-72bf-4806-886f-6b2142b0525b,Namespace:calico-system,Attempt:0,} returns sandbox id \"68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868\"" Apr 28 00:13:48.471413 systemd-networkd[1387]: cali6dd00e6370e: Gained IPv6LL Apr 28 00:13:48.471968 systemd-networkd[1387]: calibb49a003efe: Gained IPv6LL Apr 28 00:13:48.569652 kubelet[2562]: I0428 00:13:48.568218 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-7rl8q" podStartSLOduration=42.568197233 podStartE2EDuration="42.568197233s" podCreationTimestamp="2026-04-28 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:13:48.566759262 +0000 UTC m=+47.596580673" watchObservedRunningTime="2026-04-28 00:13:48.568197233 +0000 UTC m=+47.598018484" Apr 28 00:13:48.593628 kubelet[2562]: I0428 00:13:48.593431 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ghw74" podStartSLOduration=42.593404878 podStartE2EDuration="42.593404878s" podCreationTimestamp="2026-04-28 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:13:48.587476141 +0000 UTC m=+47.617297392" watchObservedRunningTime="2026-04-28 00:13:48.593404878 +0000 UTC m=+47.623226129" Apr 28 00:13:48.788820 kernel: calico-node[4478]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 28 00:13:48.790045 systemd-networkd[1387]: cali1eaf93cae85: Gained IPv6LL Apr 28 00:13:48.982629 systemd-networkd[1387]: cali619a13a1a73: Gained IPv6LL Apr 28 00:13:48.983047 
systemd-networkd[1387]: cali04c194e7082: Gained IPv6LL Apr 28 00:13:49.122034 kubelet[2562]: I0428 00:13:49.120951 2562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0057e9c9-3060-413e-8c72-3f0c08b487fd" path="/var/lib/kubelet/pods/0057e9c9-3060-413e-8c72-3f0c08b487fd/volumes" Apr 28 00:13:49.290873 systemd-networkd[1387]: vxlan.calico: Link UP Apr 28 00:13:49.290888 systemd-networkd[1387]: vxlan.calico: Gained carrier Apr 28 00:13:49.813934 systemd-networkd[1387]: caliadd47a3de4a: Gained IPv6LL Apr 28 00:13:50.105386 containerd[1486]: time="2026-04-28T00:13:50.104390155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:50.105386 containerd[1486]: time="2026-04-28T00:13:50.105235926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=7895994" Apr 28 00:13:50.107975 containerd[1486]: time="2026-04-28T00:13:50.107837561Z" level=info msg="ImageCreate event name:\"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:50.113149 containerd[1486]: time="2026-04-28T00:13:50.112995790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:50.113902 containerd[1486]: time="2026-04-28T00:13:50.113687272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"10471633\" in 3.522902523s" Apr 28 00:13:50.113902 containerd[1486]: time="2026-04-28T00:13:50.113738795Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\"" Apr 28 00:13:50.115702 containerd[1486]: time="2026-04-28T00:13:50.115481299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 00:13:50.123684 containerd[1486]: time="2026-04-28T00:13:50.123623307Z" level=info msg="CreateContainer within sandbox \"c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 28 00:13:50.150509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183312886.mount: Deactivated successfully. Apr 28 00:13:50.157094 containerd[1486]: time="2026-04-28T00:13:50.157040749Z" level=info msg="CreateContainer within sandbox \"c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"53a882e8e9af9689530eb721ac00e50f3290741fad90d6f3f3a8227c9a46e974\"" Apr 28 00:13:50.160138 containerd[1486]: time="2026-04-28T00:13:50.159833076Z" level=info msg="StartContainer for \"53a882e8e9af9689530eb721ac00e50f3290741fad90d6f3f3a8227c9a46e974\"" Apr 28 00:13:50.207988 systemd[1]: Started cri-containerd-53a882e8e9af9689530eb721ac00e50f3290741fad90d6f3f3a8227c9a46e974.scope - libcontainer container 53a882e8e9af9689530eb721ac00e50f3290741fad90d6f3f3a8227c9a46e974. 
Apr 28 00:13:50.249034 containerd[1486]: time="2026-04-28T00:13:50.248966817Z" level=info msg="StartContainer for \"53a882e8e9af9689530eb721ac00e50f3290741fad90d6f3f3a8227c9a46e974\" returns successfully" Apr 28 00:13:51.222635 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Apr 28 00:13:52.985903 containerd[1486]: time="2026-04-28T00:13:52.985022702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:52.987040 containerd[1486]: time="2026-04-28T00:13:52.986949571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=42617669" Apr 28 00:13:52.988684 containerd[1486]: time="2026-04-28T00:13:52.987548965Z" level=info msg="ImageCreate event name:\"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:52.991304 containerd[1486]: time="2026-04-28T00:13:52.990783828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:52.992039 containerd[1486]: time="2026-04-28T00:13:52.991956054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 2.876418512s" Apr 28 00:13:52.992039 containerd[1486]: time="2026-04-28T00:13:52.992011858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 28 00:13:52.994409 containerd[1486]: 
time="2026-04-28T00:13:52.994356310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 28 00:13:53.000682 containerd[1486]: time="2026-04-28T00:13:53.000622705Z" level=info msg="CreateContainer within sandbox \"96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 00:13:53.023395 containerd[1486]: time="2026-04-28T00:13:53.022357303Z" level=info msg="CreateContainer within sandbox \"96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5609428de726436852dbf2b9b00004b132ec3e19bab3b6e42ded265444281f0c\"" Apr 28 00:13:53.024229 containerd[1486]: time="2026-04-28T00:13:53.024162402Z" level=info msg="StartContainer for \"5609428de726436852dbf2b9b00004b132ec3e19bab3b6e42ded265444281f0c\"" Apr 28 00:13:53.073977 systemd[1]: Started cri-containerd-5609428de726436852dbf2b9b00004b132ec3e19bab3b6e42ded265444281f0c.scope - libcontainer container 5609428de726436852dbf2b9b00004b132ec3e19bab3b6e42ded265444281f0c. 
Apr 28 00:13:53.123455 containerd[1486]: time="2026-04-28T00:13:53.123372430Z" level=info msg="StartContainer for \"5609428de726436852dbf2b9b00004b132ec3e19bab3b6e42ded265444281f0c\" returns successfully" Apr 28 00:13:54.610758 kubelet[2562]: I0428 00:13:54.610205 2562 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:13:56.731140 containerd[1486]: time="2026-04-28T00:13:56.731047993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:56.733205 containerd[1486]: time="2026-04-28T00:13:56.733105418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=46169343" Apr 28 00:13:56.734768 containerd[1486]: time="2026-04-28T00:13:56.734716901Z" level=info msg="ImageCreate event name:\"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:56.738375 containerd[1486]: time="2026-04-28T00:13:56.738280723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:56.740039 containerd[1486]: time="2026-04-28T00:13:56.739858724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"48744950\" in 3.745215637s" Apr 28 00:13:56.740039 containerd[1486]: time="2026-04-28T00:13:56.739914287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference 
\"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\"" Apr 28 00:13:56.742404 containerd[1486]: time="2026-04-28T00:13:56.741723059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 28 00:13:56.765236 containerd[1486]: time="2026-04-28T00:13:56.765179859Z" level=info msg="CreateContainer within sandbox \"5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 28 00:13:56.793785 containerd[1486]: time="2026-04-28T00:13:56.793633074Z" level=info msg="CreateContainer within sandbox \"5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e134d9e31f2bb9fc5d12e7cb2419f6def13cdfe91ebb9ae8b4465ab61030517a\"" Apr 28 00:13:56.796521 containerd[1486]: time="2026-04-28T00:13:56.796215047Z" level=info msg="StartContainer for \"e134d9e31f2bb9fc5d12e7cb2419f6def13cdfe91ebb9ae8b4465ab61030517a\"" Apr 28 00:13:56.856248 systemd[1]: Started cri-containerd-e134d9e31f2bb9fc5d12e7cb2419f6def13cdfe91ebb9ae8b4465ab61030517a.scope - libcontainer container e134d9e31f2bb9fc5d12e7cb2419f6def13cdfe91ebb9ae8b4465ab61030517a. 
Apr 28 00:13:56.902739 containerd[1486]: time="2026-04-28T00:13:56.902496843Z" level=info msg="StartContainer for \"e134d9e31f2bb9fc5d12e7cb2419f6def13cdfe91ebb9ae8b4465ab61030517a\" returns successfully" Apr 28 00:13:57.650086 kubelet[2562]: I0428 00:13:57.648594 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-748c896d4f-j7hjq" podStartSLOduration=29.781894775 podStartE2EDuration="35.648578178s" podCreationTimestamp="2026-04-28 00:13:22 +0000 UTC" firstStartedPulling="2026-04-28 00:13:47.127441814 +0000 UTC m=+46.157263065" lastFinishedPulling="2026-04-28 00:13:52.994125217 +0000 UTC m=+52.023946468" observedRunningTime="2026-04-28 00:13:53.625799678 +0000 UTC m=+52.655620929" watchObservedRunningTime="2026-04-28 00:13:57.648578178 +0000 UTC m=+56.678399430" Apr 28 00:13:57.698160 kubelet[2562]: I0428 00:13:57.697964 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7db98b7b86-8stzl" podStartSLOduration=25.347666834 podStartE2EDuration="34.697916165s" podCreationTimestamp="2026-04-28 00:13:23 +0000 UTC" firstStartedPulling="2026-04-28 00:13:47.391243316 +0000 UTC m=+46.421064567" lastFinishedPulling="2026-04-28 00:13:56.741492647 +0000 UTC m=+55.771313898" observedRunningTime="2026-04-28 00:13:57.651766938 +0000 UTC m=+56.681588189" watchObservedRunningTime="2026-04-28 00:13:57.697916165 +0000 UTC m=+56.727737536" Apr 28 00:13:59.140004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825958532.mount: Deactivated successfully. 
Apr 28 00:13:59.526229 containerd[1486]: time="2026-04-28T00:13:59.526029295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:59.527993 containerd[1486]: time="2026-04-28T00:13:59.527898785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=48513326" Apr 28 00:13:59.530197 containerd[1486]: time="2026-04-28T00:13:59.529112323Z" level=info msg="ImageCreate event name:\"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:59.532715 containerd[1486]: time="2026-04-28T00:13:59.532635772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:59.533902 containerd[1486]: time="2026-04-28T00:13:59.533852390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"48513172\" in 2.792069408s" Apr 28 00:13:59.534003 containerd[1486]: time="2026-04-28T00:13:59.533907793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\"" Apr 28 00:13:59.536574 containerd[1486]: time="2026-04-28T00:13:59.536475916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 28 00:13:59.542792 containerd[1486]: time="2026-04-28T00:13:59.542738736Z" level=info msg="CreateContainer within sandbox 
\"7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 28 00:13:59.565829 containerd[1486]: time="2026-04-28T00:13:59.565761038Z" level=info msg="CreateContainer within sandbox \"7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0992ef352f87b6a4f05d421287943ea08b6120896e5d0d9c87d5ad24d47b3f81\"" Apr 28 00:13:59.569783 containerd[1486]: time="2026-04-28T00:13:59.569034435Z" level=info msg="StartContainer for \"0992ef352f87b6a4f05d421287943ea08b6120896e5d0d9c87d5ad24d47b3f81\"" Apr 28 00:13:59.659962 systemd[1]: Started cri-containerd-0992ef352f87b6a4f05d421287943ea08b6120896e5d0d9c87d5ad24d47b3f81.scope - libcontainer container 0992ef352f87b6a4f05d421287943ea08b6120896e5d0d9c87d5ad24d47b3f81. Apr 28 00:13:59.727889 containerd[1486]: time="2026-04-28T00:13:59.727812240Z" level=info msg="StartContainer for \"0992ef352f87b6a4f05d421287943ea08b6120896e5d0d9c87d5ad24d47b3f81\" returns successfully" Apr 28 00:13:59.954227 containerd[1486]: time="2026-04-28T00:13:59.953072028Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:13:59.954227 containerd[1486]: time="2026-04-28T00:13:59.954172441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 28 00:13:59.958777 containerd[1486]: time="2026-04-28T00:13:59.958702458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 422.133858ms" Apr 28 00:13:59.959034 containerd[1486]: 
time="2026-04-28T00:13:59.959012913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 28 00:13:59.960733 containerd[1486]: time="2026-04-28T00:13:59.960637151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 28 00:13:59.968943 containerd[1486]: time="2026-04-28T00:13:59.968897466Z" level=info msg="CreateContainer within sandbox \"867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 28 00:13:59.990018 containerd[1486]: time="2026-04-28T00:13:59.988837221Z" level=info msg="CreateContainer within sandbox \"867d55eff19ef939a861d0672007b7df9b8c1d9a3b7e9321cc269c94a0ba3cad\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc7f337084d94892d4a1690e71b1b06a6b6d56a78b16e9af90cd8fb3703b4baf\"" Apr 28 00:13:59.990638 containerd[1486]: time="2026-04-28T00:13:59.990589745Z" level=info msg="StartContainer for \"dc7f337084d94892d4a1690e71b1b06a6b6d56a78b16e9af90cd8fb3703b4baf\"" Apr 28 00:14:00.049028 systemd[1]: Started cri-containerd-dc7f337084d94892d4a1690e71b1b06a6b6d56a78b16e9af90cd8fb3703b4baf.scope - libcontainer container dc7f337084d94892d4a1690e71b1b06a6b6d56a78b16e9af90cd8fb3703b4baf. Apr 28 00:14:00.099704 containerd[1486]: time="2026-04-28T00:14:00.099456305Z" level=info msg="StartContainer for \"dc7f337084d94892d4a1690e71b1b06a6b6d56a78b16e9af90cd8fb3703b4baf\" returns successfully" Apr 28 00:14:00.701517 systemd[1]: run-containerd-runc-k8s.io-0992ef352f87b6a4f05d421287943ea08b6120896e5d0d9c87d5ad24d47b3f81-runc.FhUtTJ.mount: Deactivated successfully. 
Apr 28 00:14:00.749919 kubelet[2562]: I0428 00:14:00.749813 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-748c896d4f-kszc5" podStartSLOduration=26.783648492 podStartE2EDuration="38.749781472s" podCreationTimestamp="2026-04-28 00:13:22 +0000 UTC" firstStartedPulling="2026-04-28 00:13:47.994429407 +0000 UTC m=+47.024250658" lastFinishedPulling="2026-04-28 00:13:59.960562267 +0000 UTC m=+58.990383638" observedRunningTime="2026-04-28 00:14:00.686439659 +0000 UTC m=+59.716260910" watchObservedRunningTime="2026-04-28 00:14:00.749781472 +0000 UTC m=+59.779602723" Apr 28 00:14:00.986764 kubelet[2562]: I0428 00:14:00.986492 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7fb6cdc5d9-6h4jt" podStartSLOduration=27.424988422 podStartE2EDuration="38.986469822s" podCreationTimestamp="2026-04-28 00:13:22 +0000 UTC" firstStartedPulling="2026-04-28 00:13:47.974218319 +0000 UTC m=+47.004039530" lastFinishedPulling="2026-04-28 00:13:59.535699639 +0000 UTC m=+58.565520930" observedRunningTime="2026-04-28 00:14:00.758196667 +0000 UTC m=+59.788017918" watchObservedRunningTime="2026-04-28 00:14:00.986469822 +0000 UTC m=+60.016291033" Apr 28 00:14:01.113176 containerd[1486]: time="2026-04-28T00:14:01.112738609Z" level=info msg="StopPodSandbox for \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\"" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.226 [WARNING][5002] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b726e622-d473-4766-8608-042b869c5024", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa", Pod:"coredns-7d764666f9-7rl8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali619a13a1a73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.227 [INFO][5002] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.227 [INFO][5002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" iface="eth0" netns="" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.227 [INFO][5002] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.227 [INFO][5002] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.275 [INFO][5009] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.275 [INFO][5009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.275 [INFO][5009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.288 [WARNING][5009] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.288 [INFO][5009] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.294 [INFO][5009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:01.307111 containerd[1486]: 2026-04-28 00:14:01.297 [INFO][5002] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.309425 containerd[1486]: time="2026-04-28T00:14:01.306842987Z" level=info msg="TearDown network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\" successfully" Apr 28 00:14:01.309425 containerd[1486]: time="2026-04-28T00:14:01.308302374Z" level=info msg="StopPodSandbox for \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\" returns successfully" Apr 28 00:14:01.310763 containerd[1486]: time="2026-04-28T00:14:01.309706239Z" level=info msg="RemovePodSandbox for \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\"" Apr 28 00:14:01.343965 containerd[1486]: time="2026-04-28T00:14:01.343623761Z" level=info msg="Forcibly stopping sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\"" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.430 [WARNING][5026] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b726e622-d473-4766-8608-042b869c5024", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"0b8919881fae137afb4ac5a8260aacd009684a9983025eeb03cb62868e5210aa", Pod:"coredns-7d764666f9-7rl8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali619a13a1a73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.431 [INFO][5026] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.431 [INFO][5026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" iface="eth0" netns="" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.431 [INFO][5026] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.431 [INFO][5026] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.472 [INFO][5033] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.472 [INFO][5033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.473 [INFO][5033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.491 [WARNING][5033] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.491 [INFO][5033] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" HandleID="k8s-pod-network.36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--7rl8q-eth0" Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.494 [INFO][5033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:01.500647 containerd[1486]: 2026-04-28 00:14:01.497 [INFO][5026] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67" Apr 28 00:14:01.501195 containerd[1486]: time="2026-04-28T00:14:01.500819919Z" level=info msg="TearDown network for sandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\" successfully" Apr 28 00:14:01.507413 containerd[1486]: time="2026-04-28T00:14:01.507310498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:14:01.508075 containerd[1486]: time="2026-04-28T00:14:01.507431384Z" level=info msg="RemovePodSandbox \"36e5d7af3ffd7362ef9d78a10adddf73ffea0f614417be3573da97c8996c7f67\" returns successfully" Apr 28 00:14:01.509689 containerd[1486]: time="2026-04-28T00:14:01.509171784Z" level=info msg="StopPodSandbox for \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\"" Apr 28 00:14:01.670924 kubelet[2562]: I0428 00:14:01.670867 2562 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.586 [WARNING][5047] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0", GenerateName:"calico-apiserver-748c896d4f-", Namespace:"calico-system", SelfLink:"", UID:"cde2ebe5-98ed-4186-a2bd-c3bd8f220759", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c896d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd", Pod:"calico-apiserver-748c896d4f-j7hjq", 
Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibb49a003efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.589 [INFO][5047] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.589 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" iface="eth0" netns="" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.589 [INFO][5047] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.589 [INFO][5047] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.636 [INFO][5055] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.637 [INFO][5055] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.637 [INFO][5055] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.656 [WARNING][5055] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.656 [INFO][5055] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.660 [INFO][5055] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:01.673413 containerd[1486]: 2026-04-28 00:14:01.668 [INFO][5047] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.674651 containerd[1486]: time="2026-04-28T00:14:01.673944491Z" level=info msg="TearDown network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\" successfully" Apr 28 00:14:01.674651 containerd[1486]: time="2026-04-28T00:14:01.673990533Z" level=info msg="StopPodSandbox for \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\" returns successfully" Apr 28 00:14:01.675719 containerd[1486]: time="2026-04-28T00:14:01.675614048Z" level=info msg="RemovePodSandbox for \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\"" Apr 28 00:14:01.675908 containerd[1486]: time="2026-04-28T00:14:01.675816897Z" level=info msg="Forcibly stopping sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\"" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.755 [WARNING][5069] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0", GenerateName:"calico-apiserver-748c896d4f-", Namespace:"calico-system", SelfLink:"", UID:"cde2ebe5-98ed-4186-a2bd-c3bd8f220759", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748c896d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"96b153d487423ae6528951b8cbb248df0499e933d17c71a96894ffa701db75cd", Pod:"calico-apiserver-748c896d4f-j7hjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibb49a003efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.756 [INFO][5069] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.756 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" iface="eth0" netns="" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.756 [INFO][5069] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.756 [INFO][5069] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.796 [INFO][5077] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.797 [INFO][5077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.797 [INFO][5077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.819 [WARNING][5077] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.819 [INFO][5077] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" HandleID="k8s-pod-network.e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--apiserver--748c896d4f--j7hjq-eth0" Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.830 [INFO][5077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:01.839759 containerd[1486]: 2026-04-28 00:14:01.833 [INFO][5069] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663" Apr 28 00:14:01.839759 containerd[1486]: time="2026-04-28T00:14:01.837866199Z" level=info msg="TearDown network for sandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\" successfully" Apr 28 00:14:01.847110 containerd[1486]: time="2026-04-28T00:14:01.846783330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:14:01.847110 containerd[1486]: time="2026-04-28T00:14:01.846899695Z" level=info msg="RemovePodSandbox \"e48848c5c8e7ea044aa0104608118710e5b4f565368c53cfc32b3d1caa0a2663\" returns successfully" Apr 28 00:14:01.847851 containerd[1486]: time="2026-04-28T00:14:01.847820537Z" level=info msg="StopPodSandbox for \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\"" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.918 [WARNING][5091] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0", GenerateName:"calico-kube-controllers-7db98b7b86-", Namespace:"calico-system", SelfLink:"", UID:"bb42889c-dc68-4341-9c32-e020808fbd20", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db98b7b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42", Pod:"calico-kube-controllers-7db98b7b86-8stzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eaf93cae85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.918 [INFO][5091] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.918 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" iface="eth0" netns="" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.918 [INFO][5091] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.918 [INFO][5091] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.953 [INFO][5098] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.953 [INFO][5098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.953 [INFO][5098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.967 [WARNING][5098] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.967 [INFO][5098] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.969 [INFO][5098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:01.975933 containerd[1486]: 2026-04-28 00:14:01.972 [INFO][5091] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:01.978417 containerd[1486]: time="2026-04-28T00:14:01.975863353Z" level=info msg="TearDown network for sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\" successfully" Apr 28 00:14:01.978417 containerd[1486]: time="2026-04-28T00:14:01.976005080Z" level=info msg="StopPodSandbox for \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\" returns successfully" Apr 28 00:14:01.978417 containerd[1486]: time="2026-04-28T00:14:01.976945963Z" level=info msg="RemovePodSandbox for \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\"" Apr 28 00:14:01.978417 containerd[1486]: time="2026-04-28T00:14:01.976989325Z" level=info msg="Forcibly stopping sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\"" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.067 [WARNING][5114] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0", GenerateName:"calico-kube-controllers-7db98b7b86-", Namespace:"calico-system", SelfLink:"", UID:"bb42889c-dc68-4341-9c32-e020808fbd20", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db98b7b86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"5a78567b019cd2c58f4ddd7663283b3818fe0d9704526df4cec641155395fa42", Pod:"calico-kube-controllers-7db98b7b86-8stzl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eaf93cae85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.068 [INFO][5114] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.068 [INFO][5114] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" iface="eth0" netns="" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.068 [INFO][5114] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.068 [INFO][5114] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.096 [INFO][5121] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.096 [INFO][5121] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.096 [INFO][5121] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.109 [WARNING][5121] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.109 [INFO][5121] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" HandleID="k8s-pod-network.cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Workload="ci--4081--3--7--n--651e172f95-k8s-calico--kube--controllers--7db98b7b86--8stzl-eth0" Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.112 [INFO][5121] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:02.119532 containerd[1486]: 2026-04-28 00:14:02.115 [INFO][5114] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d" Apr 28 00:14:02.119532 containerd[1486]: time="2026-04-28T00:14:02.118125286Z" level=info msg="TearDown network for sandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\" successfully" Apr 28 00:14:02.127327 containerd[1486]: time="2026-04-28T00:14:02.127265059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:14:02.128024 containerd[1486]: time="2026-04-28T00:14:02.127598234Z" level=info msg="RemovePodSandbox \"cd35f321240727e740e50da4cf4c259280fa7f7ff3f63eb26107f773188bd21d\" returns successfully" Apr 28 00:14:02.130040 containerd[1486]: time="2026-04-28T00:14:02.128910133Z" level=info msg="StopPodSandbox for \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\"" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.208 [WARNING][5136] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"26e97b01-fe08-43da-b0e6-c4023295e62a", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53", Pod:"coredns-7d764666f9-ghw74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2557e6eeab5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.208 [INFO][5136] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.208 [INFO][5136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" iface="eth0" netns="" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.208 [INFO][5136] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.208 [INFO][5136] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.235 [INFO][5143] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.236 [INFO][5143] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.236 [INFO][5143] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.252 [WARNING][5143] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.254 [INFO][5143] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.259 [INFO][5143] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:02.267272 containerd[1486]: 2026-04-28 00:14:02.262 [INFO][5136] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.267971 containerd[1486]: time="2026-04-28T00:14:02.267361512Z" level=info msg="TearDown network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\" successfully" Apr 28 00:14:02.267971 containerd[1486]: time="2026-04-28T00:14:02.267442116Z" level=info msg="StopPodSandbox for \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\" returns successfully" Apr 28 00:14:02.270329 containerd[1486]: time="2026-04-28T00:14:02.270274604Z" level=info msg="RemovePodSandbox for \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\"" Apr 28 00:14:02.270329 containerd[1486]: time="2026-04-28T00:14:02.270335967Z" level=info msg="Forcibly stopping sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\"" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.349 [WARNING][5157] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"26e97b01-fe08-43da-b0e6-c4023295e62a", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"68580b8bfa6b0822e2313a152bc56ef7e24fda1743126e3ec6bd18f01d05cc53", Pod:"coredns-7d764666f9-ghw74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2557e6eeab5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.350 [INFO][5157] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.350 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" iface="eth0" netns="" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.350 [INFO][5157] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.350 [INFO][5157] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.391 [INFO][5165] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.391 [INFO][5165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.391 [INFO][5165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.403 [WARNING][5165] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.404 [INFO][5165] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" HandleID="k8s-pod-network.9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Workload="ci--4081--3--7--n--651e172f95-k8s-coredns--7d764666f9--ghw74-eth0" Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.407 [INFO][5165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:02.414149 containerd[1486]: 2026-04-28 00:14:02.411 [INFO][5157] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2" Apr 28 00:14:02.415165 containerd[1486]: time="2026-04-28T00:14:02.414153509Z" level=info msg="TearDown network for sandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\" successfully" Apr 28 00:14:02.420484 containerd[1486]: time="2026-04-28T00:14:02.420328628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:14:02.420484 containerd[1486]: time="2026-04-28T00:14:02.420467634Z" level=info msg="RemovePodSandbox \"9d800563a14a1ce4dbccd35c53fa322110b1cc05d1e2b2363fd0afbb6db141e2\" returns successfully" Apr 28 00:14:02.421967 containerd[1486]: time="2026-04-28T00:14:02.421467879Z" level=info msg="StopPodSandbox for \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\"" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.478 [WARNING][5179] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"c14518d0-875d-4e4a-bf96-4a7030b764e9", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8", Pod:"goldmane-7fb6cdc5d9-6h4jt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali04c194e7082", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.479 [INFO][5179] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.480 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" iface="eth0" netns="" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.480 [INFO][5179] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.480 [INFO][5179] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.509 [INFO][5186] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.509 [INFO][5186] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.509 [INFO][5186] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.523 [WARNING][5186] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.523 [INFO][5186] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.526 [INFO][5186] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:02.531430 containerd[1486]: 2026-04-28 00:14:02.528 [INFO][5179] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.532803 containerd[1486]: time="2026-04-28T00:14:02.532183125Z" level=info msg="TearDown network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\" successfully" Apr 28 00:14:02.532803 containerd[1486]: time="2026-04-28T00:14:02.532313211Z" level=info msg="StopPodSandbox for \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\" returns successfully" Apr 28 00:14:02.533647 containerd[1486]: time="2026-04-28T00:14:02.533606069Z" level=info msg="RemovePodSandbox for \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\"" Apr 28 00:14:02.533759 containerd[1486]: time="2026-04-28T00:14:02.533694753Z" level=info msg="Forcibly stopping sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\"" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.596 [WARNING][5200] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"c14518d0-875d-4e4a-bf96-4a7030b764e9", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.April, 28, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-n-651e172f95", ContainerID:"7871851457b0a8a2e4441ceddfe6ac142ced4c3c593f0d09c2491fba1adab0f8", Pod:"goldmane-7fb6cdc5d9-6h4jt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04c194e7082", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.596 [INFO][5200] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.596 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" iface="eth0" netns="" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.596 [INFO][5200] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.596 [INFO][5200] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.622 [INFO][5207] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.622 [INFO][5207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.622 [INFO][5207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.636 [WARNING][5207] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.636 [INFO][5207] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" HandleID="k8s-pod-network.8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Workload="ci--4081--3--7--n--651e172f95-k8s-goldmane--7fb6cdc5d9--6h4jt-eth0" Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.639 [INFO][5207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 28 00:14:02.645036 containerd[1486]: 2026-04-28 00:14:02.641 [INFO][5200] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32" Apr 28 00:14:02.645036 containerd[1486]: time="2026-04-28T00:14:02.644979864Z" level=info msg="TearDown network for sandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\" successfully" Apr 28 00:14:02.654683 containerd[1486]: time="2026-04-28T00:14:02.654586738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 28 00:14:02.654891 containerd[1486]: time="2026-04-28T00:14:02.654737505Z" level=info msg="RemovePodSandbox \"8bcb3c9c3b335f8c8d5b9bbb0a884b99232be7269caff42d886da74d39db0f32\" returns successfully" Apr 28 00:14:03.083388 containerd[1486]: time="2026-04-28T00:14:03.082328012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:03.085340 containerd[1486]: time="2026-04-28T00:14:03.085289303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=5896864" Apr 28 00:14:03.086815 containerd[1486]: time="2026-04-28T00:14:03.086769809Z" level=info msg="ImageCreate event name:\"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:03.091059 containerd[1486]: time="2026-04-28T00:14:03.090996477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:03.093145 containerd[1486]: time="2026-04-28T00:14:03.093017287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"8472495\" in 3.132279491s" Apr 28 00:14:03.093387 containerd[1486]: time="2026-04-28T00:14:03.093365582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\"" Apr 28 00:14:03.098010 containerd[1486]: time="2026-04-28T00:14:03.097947946Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 28 00:14:03.104371 containerd[1486]: time="2026-04-28T00:14:03.104135861Z" level=info msg="CreateContainer within sandbox \"68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 28 00:14:03.129356 containerd[1486]: time="2026-04-28T00:14:03.128644549Z" level=info msg="CreateContainer within sandbox \"68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f31107373e4d666403c73550dacd3c30bd19f91f700f58119bf757b126fbd743\"" Apr 28 00:14:03.131685 containerd[1486]: time="2026-04-28T00:14:03.130808885Z" level=info msg="StartContainer for \"f31107373e4d666403c73550dacd3c30bd19f91f700f58119bf757b126fbd743\"" Apr 28 00:14:03.132522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1565192459.mount: Deactivated successfully. Apr 28 00:14:03.190020 systemd[1]: Started cri-containerd-f31107373e4d666403c73550dacd3c30bd19f91f700f58119bf757b126fbd743.scope - libcontainer container f31107373e4d666403c73550dacd3c30bd19f91f700f58119bf757b126fbd743. 
Apr 28 00:14:03.233617 containerd[1486]: time="2026-04-28T00:14:03.233376882Z" level=info msg="StartContainer for \"f31107373e4d666403c73550dacd3c30bd19f91f700f58119bf757b126fbd743\" returns successfully" Apr 28 00:14:05.132114 containerd[1486]: time="2026-04-28T00:14:05.132023742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:05.134303 containerd[1486]: time="2026-04-28T00:14:05.134241997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=12456618" Apr 28 00:14:05.135862 containerd[1486]: time="2026-04-28T00:14:05.135801664Z" level=info msg="ImageCreate event name:\"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:05.140219 containerd[1486]: time="2026-04-28T00:14:05.140150491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:05.142880 containerd[1486]: time="2026-04-28T00:14:05.142635918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"15032209\" in 2.04462669s" Apr 28 00:14:05.142880 containerd[1486]: time="2026-04-28T00:14:05.142732762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\"" Apr 28 00:14:05.146136 containerd[1486]: 
time="2026-04-28T00:14:05.145206789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 28 00:14:05.152398 containerd[1486]: time="2026-04-28T00:14:05.152339535Z" level=info msg="CreateContainer within sandbox \"c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 28 00:14:05.174352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46671986.mount: Deactivated successfully. Apr 28 00:14:05.181193 containerd[1486]: time="2026-04-28T00:14:05.181016568Z" level=info msg="CreateContainer within sandbox \"c1b464e398734d5dcae0ca90d56375d6497da346e7d2120c7c425a73cef5b415\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bc62424b7f9c08fe72b3ca9c2f8d3d9a7496feb3f092dc1f964c799f449784a1\"" Apr 28 00:14:05.183858 containerd[1486]: time="2026-04-28T00:14:05.181939248Z" level=info msg="StartContainer for \"bc62424b7f9c08fe72b3ca9c2f8d3d9a7496feb3f092dc1f964c799f449784a1\"" Apr 28 00:14:05.229016 systemd[1]: Started cri-containerd-bc62424b7f9c08fe72b3ca9c2f8d3d9a7496feb3f092dc1f964c799f449784a1.scope - libcontainer container bc62424b7f9c08fe72b3ca9c2f8d3d9a7496feb3f092dc1f964c799f449784a1. 
Apr 28 00:14:05.270059 containerd[1486]: time="2026-04-28T00:14:05.269939351Z" level=info msg="StartContainer for \"bc62424b7f9c08fe72b3ca9c2f8d3d9a7496feb3f092dc1f964c799f449784a1\" returns successfully" Apr 28 00:14:06.241253 kubelet[2562]: I0428 00:14:06.240875 2562 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 28 00:14:06.241253 kubelet[2562]: I0428 00:14:06.240952 2562 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 28 00:14:06.435566 kubelet[2562]: I0428 00:14:06.434326 2562 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:14:06.472564 kubelet[2562]: I0428 00:14:06.472338 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-687qr" podStartSLOduration=24.910092091 podStartE2EDuration="43.472308223s" podCreationTimestamp="2026-04-28 00:13:23 +0000 UTC" firstStartedPulling="2026-04-28 00:13:46.582876372 +0000 UTC m=+45.612697583" lastFinishedPulling="2026-04-28 00:14:05.145092464 +0000 UTC m=+64.174913715" observedRunningTime="2026-04-28 00:14:05.727444582 +0000 UTC m=+64.757265833" watchObservedRunningTime="2026-04-28 00:14:06.472308223 +0000 UTC m=+65.502129474" Apr 28 00:14:07.460741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923399642.mount: Deactivated successfully. 
Apr 28 00:14:07.482844 containerd[1486]: time="2026-04-28T00:14:07.482771323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:07.485093 containerd[1486]: time="2026-04-28T00:14:07.485006936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=15624823" Apr 28 00:14:07.492783 containerd[1486]: time="2026-04-28T00:14:07.492701097Z" level=info msg="ImageCreate event name:\"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:07.496904 containerd[1486]: time="2026-04-28T00:14:07.496577539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:14:07.498567 containerd[1486]: time="2026-04-28T00:14:07.498378694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"15624653\" in 2.353124984s" Apr 28 00:14:07.498567 containerd[1486]: time="2026-04-28T00:14:07.498428896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\"" Apr 28 00:14:07.506875 containerd[1486]: time="2026-04-28T00:14:07.506808446Z" level=info msg="CreateContainer within sandbox \"68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 28 00:14:07.530724 
containerd[1486]: time="2026-04-28T00:14:07.530604279Z" level=info msg="CreateContainer within sandbox \"68b90606991c3b7ee5faa9ef8e69f6939dd5b4a4a6c73596216f8e4d5de13868\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"81e21a07f82a8c82bb50b88f9742f136df310ba6b7cd110391d054f63daea4c2\"" Apr 28 00:14:07.531870 containerd[1486]: time="2026-04-28T00:14:07.531779688Z" level=info msg="StartContainer for \"81e21a07f82a8c82bb50b88f9742f136df310ba6b7cd110391d054f63daea4c2\"" Apr 28 00:14:07.580970 systemd[1]: Started cri-containerd-81e21a07f82a8c82bb50b88f9742f136df310ba6b7cd110391d054f63daea4c2.scope - libcontainer container 81e21a07f82a8c82bb50b88f9742f136df310ba6b7cd110391d054f63daea4c2. Apr 28 00:14:07.633797 containerd[1486]: time="2026-04-28T00:14:07.633493854Z" level=info msg="StartContainer for \"81e21a07f82a8c82bb50b88f9742f136df310ba6b7cd110391d054f63daea4c2\" returns successfully" Apr 28 00:14:07.741337 kubelet[2562]: I0428 00:14:07.740767 2562 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-668955cff-wfhj9" podStartSLOduration=1.667646169 podStartE2EDuration="20.740745651s" podCreationTimestamp="2026-04-28 00:13:47 +0000 UTC" firstStartedPulling="2026-04-28 00:13:48.427040166 +0000 UTC m=+47.456861417" lastFinishedPulling="2026-04-28 00:14:07.500139648 +0000 UTC m=+66.529960899" observedRunningTime="2026-04-28 00:14:07.739477918 +0000 UTC m=+66.769299209" watchObservedRunningTime="2026-04-28 00:14:07.740745651 +0000 UTC m=+66.770566942" Apr 28 00:14:16.554779 systemd[1]: run-containerd-runc-k8s.io-4035ad8342ca424126e89282f5ad9dde5e3abc5e18d7b3e88113cfc7925fbd3a-runc.cDmTUs.mount: Deactivated successfully. 
Apr 28 00:14:41.252194 kubelet[2562]: I0428 00:14:41.251427 2562 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 28 00:15:28.231983 systemd[1]: Started sshd@7-178.105.25.61:22-50.85.169.122:42164.service - OpenSSH per-connection server daemon (50.85.169.122:42164). Apr 28 00:15:28.369739 sshd[5629]: Accepted publickey for core from 50.85.169.122 port 42164 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:28.371523 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:28.379404 systemd-logind[1471]: New session 8 of user core. Apr 28 00:15:28.384049 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 28 00:15:28.571087 sshd[5629]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:28.576863 systemd[1]: sshd@7-178.105.25.61:22-50.85.169.122:42164.service: Deactivated successfully. Apr 28 00:15:28.579337 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 00:15:28.580409 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Apr 28 00:15:28.582431 systemd-logind[1471]: Removed session 8. Apr 28 00:15:33.596743 systemd[1]: Started sshd@8-178.105.25.61:22-50.85.169.122:47856.service - OpenSSH per-connection server daemon (50.85.169.122:47856). Apr 28 00:15:33.718542 sshd[5662]: Accepted publickey for core from 50.85.169.122 port 47856 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:33.720258 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:33.726803 systemd-logind[1471]: New session 9 of user core. Apr 28 00:15:33.733993 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 28 00:15:33.909899 sshd[5662]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:33.916810 systemd[1]: sshd@8-178.105.25.61:22-50.85.169.122:47856.service: Deactivated successfully. 
Apr 28 00:15:33.919936 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 00:15:33.922281 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Apr 28 00:15:33.923526 systemd-logind[1471]: Removed session 9. Apr 28 00:15:38.949990 systemd[1]: Started sshd@9-178.105.25.61:22-50.85.169.122:47858.service - OpenSSH per-connection server daemon (50.85.169.122:47858). Apr 28 00:15:39.086742 sshd[5678]: Accepted publickey for core from 50.85.169.122 port 47858 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:39.087946 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:39.094470 systemd-logind[1471]: New session 10 of user core. Apr 28 00:15:39.104114 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 28 00:15:39.283706 sshd[5678]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:39.288447 systemd[1]: sshd@9-178.105.25.61:22-50.85.169.122:47858.service: Deactivated successfully. Apr 28 00:15:39.290465 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 00:15:39.291555 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Apr 28 00:15:39.293140 systemd-logind[1471]: Removed session 10. Apr 28 00:15:44.319081 systemd[1]: Started sshd@10-178.105.25.61:22-50.85.169.122:47382.service - OpenSSH per-connection server daemon (50.85.169.122:47382). Apr 28 00:15:44.441323 sshd[5712]: Accepted publickey for core from 50.85.169.122 port 47382 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:44.443744 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:44.450373 systemd-logind[1471]: New session 11 of user core. Apr 28 00:15:44.456068 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 28 00:15:44.639777 sshd[5712]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:44.644995 systemd[1]: sshd@10-178.105.25.61:22-50.85.169.122:47382.service: Deactivated successfully. Apr 28 00:15:44.649355 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 00:15:44.651219 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Apr 28 00:15:44.652285 systemd-logind[1471]: Removed session 11. Apr 28 00:15:49.682075 systemd[1]: Started sshd@11-178.105.25.61:22-50.85.169.122:53084.service - OpenSSH per-connection server daemon (50.85.169.122:53084). Apr 28 00:15:49.812920 sshd[5753]: Accepted publickey for core from 50.85.169.122 port 53084 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:49.816514 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:49.824128 systemd-logind[1471]: New session 12 of user core. Apr 28 00:15:49.832020 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 28 00:15:50.022956 sshd[5753]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:50.031175 systemd[1]: sshd@11-178.105.25.61:22-50.85.169.122:53084.service: Deactivated successfully. Apr 28 00:15:50.034708 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 00:15:50.035561 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Apr 28 00:15:50.036958 systemd-logind[1471]: Removed session 12. Apr 28 00:15:50.045950 systemd[1]: Started sshd@12-178.105.25.61:22-50.85.169.122:53096.service - OpenSSH per-connection server daemon (50.85.169.122:53096). Apr 28 00:15:50.175224 sshd[5767]: Accepted publickey for core from 50.85.169.122 port 53096 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:50.178729 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:50.185013 systemd-logind[1471]: New session 13 of user core. 
Apr 28 00:15:50.195887 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 00:15:50.421017 sshd[5767]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:50.427045 systemd[1]: sshd@12-178.105.25.61:22-50.85.169.122:53096.service: Deactivated successfully. Apr 28 00:15:50.433995 systemd[1]: session-13.scope: Deactivated successfully. Apr 28 00:15:50.436312 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Apr 28 00:15:50.455131 systemd[1]: Started sshd@13-178.105.25.61:22-50.85.169.122:53098.service - OpenSSH per-connection server daemon (50.85.169.122:53098). Apr 28 00:15:50.456457 systemd-logind[1471]: Removed session 13. Apr 28 00:15:50.578426 sshd[5778]: Accepted publickey for core from 50.85.169.122 port 53098 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:50.580844 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:50.588602 systemd-logind[1471]: New session 14 of user core. Apr 28 00:15:50.597942 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 00:15:50.782113 sshd[5778]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:50.786829 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Apr 28 00:15:50.788504 systemd[1]: sshd@13-178.105.25.61:22-50.85.169.122:53098.service: Deactivated successfully. Apr 28 00:15:50.792385 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 00:15:50.793829 systemd-logind[1471]: Removed session 14. Apr 28 00:15:55.813176 systemd[1]: Started sshd@14-178.105.25.61:22-50.85.169.122:53112.service - OpenSSH per-connection server daemon (50.85.169.122:53112). 
Apr 28 00:15:55.930239 sshd[5790]: Accepted publickey for core from 50.85.169.122 port 53112 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:55.931543 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:55.937356 systemd-logind[1471]: New session 15 of user core. Apr 28 00:15:55.941895 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 28 00:15:56.121743 sshd[5790]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:56.129971 systemd[1]: sshd@14-178.105.25.61:22-50.85.169.122:53112.service: Deactivated successfully. Apr 28 00:15:56.133582 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 00:15:56.135204 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Apr 28 00:15:56.156142 systemd[1]: Started sshd@15-178.105.25.61:22-50.85.169.122:53114.service - OpenSSH per-connection server daemon (50.85.169.122:53114). Apr 28 00:15:56.157341 systemd-logind[1471]: Removed session 15. Apr 28 00:15:56.271182 sshd[5803]: Accepted publickey for core from 50.85.169.122 port 53114 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:56.273435 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:56.280735 systemd-logind[1471]: New session 16 of user core. Apr 28 00:15:56.288226 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 00:15:56.650932 sshd[5803]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:56.657342 systemd[1]: sshd@15-178.105.25.61:22-50.85.169.122:53114.service: Deactivated successfully. Apr 28 00:15:56.657857 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Apr 28 00:15:56.663738 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 00:15:56.665711 systemd-logind[1471]: Removed session 16. 
Apr 28 00:15:56.681903 systemd[1]: Started sshd@16-178.105.25.61:22-50.85.169.122:53118.service - OpenSSH per-connection server daemon (50.85.169.122:53118). Apr 28 00:15:56.807780 sshd[5814]: Accepted publickey for core from 50.85.169.122 port 53118 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:56.810618 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:56.818200 systemd-logind[1471]: New session 17 of user core. Apr 28 00:15:56.823054 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 00:15:57.777376 sshd[5814]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:57.785124 systemd[1]: sshd@16-178.105.25.61:22-50.85.169.122:53118.service: Deactivated successfully. Apr 28 00:15:57.791572 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 00:15:57.793012 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Apr 28 00:15:57.803692 systemd[1]: Started sshd@17-178.105.25.61:22-50.85.169.122:53126.service - OpenSSH per-connection server daemon (50.85.169.122:53126). Apr 28 00:15:57.805768 systemd-logind[1471]: Removed session 17. Apr 28 00:15:57.955887 sshd[5852]: Accepted publickey for core from 50.85.169.122 port 53126 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:57.959419 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:57.968214 systemd-logind[1471]: New session 18 of user core. Apr 28 00:15:57.974820 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 28 00:15:58.346587 sshd[5852]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:58.352207 systemd[1]: sshd@17-178.105.25.61:22-50.85.169.122:53126.service: Deactivated successfully. Apr 28 00:15:58.358313 systemd[1]: session-18.scope: Deactivated successfully. Apr 28 00:15:58.359969 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. 
Apr 28 00:15:58.379234 systemd[1]: Started sshd@18-178.105.25.61:22-50.85.169.122:53128.service - OpenSSH per-connection server daemon (50.85.169.122:53128). Apr 28 00:15:58.381090 systemd-logind[1471]: Removed session 18. Apr 28 00:15:58.505131 sshd[5864]: Accepted publickey for core from 50.85.169.122 port 53128 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:15:58.508861 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:15:58.517076 systemd-logind[1471]: New session 19 of user core. Apr 28 00:15:58.520946 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 00:15:58.714064 sshd[5864]: pam_unix(sshd:session): session closed for user core Apr 28 00:15:58.721056 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit. Apr 28 00:15:58.721589 systemd[1]: sshd@18-178.105.25.61:22-50.85.169.122:53128.service: Deactivated successfully. Apr 28 00:15:58.725077 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 00:15:58.727746 systemd-logind[1471]: Removed session 19. Apr 28 00:16:03.758207 systemd[1]: Started sshd@19-178.105.25.61:22-50.85.169.122:52212.service - OpenSSH per-connection server daemon (50.85.169.122:52212). Apr 28 00:16:03.879150 sshd[5904]: Accepted publickey for core from 50.85.169.122 port 52212 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:16:03.881885 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:16:03.887613 systemd-logind[1471]: New session 20 of user core. Apr 28 00:16:03.891083 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 28 00:16:04.079915 sshd[5904]: pam_unix(sshd:session): session closed for user core Apr 28 00:16:04.085447 systemd[1]: sshd@19-178.105.25.61:22-50.85.169.122:52212.service: Deactivated successfully. Apr 28 00:16:04.089313 systemd[1]: session-20.scope: Deactivated successfully. 
Apr 28 00:16:04.090862 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit. Apr 28 00:16:04.092217 systemd-logind[1471]: Removed session 20. Apr 28 00:16:09.113396 systemd[1]: Started sshd@20-178.105.25.61:22-50.85.169.122:52216.service - OpenSSH per-connection server daemon (50.85.169.122:52216). Apr 28 00:16:09.248773 sshd[5918]: Accepted publickey for core from 50.85.169.122 port 52216 ssh2: RSA SHA256:0j9rnzg//LrMaH1kTEcAP6LieMSKEVjCW+ZXnbaTdVE Apr 28 00:16:09.250590 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:16:09.256742 systemd-logind[1471]: New session 21 of user core. Apr 28 00:16:09.264045 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 28 00:16:09.441162 sshd[5918]: pam_unix(sshd:session): session closed for user core Apr 28 00:16:09.447581 systemd[1]: sshd@20-178.105.25.61:22-50.85.169.122:52216.service: Deactivated successfully. Apr 28 00:16:09.451362 systemd[1]: session-21.scope: Deactivated successfully. Apr 28 00:16:09.452752 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. Apr 28 00:16:09.454708 systemd-logind[1471]: Removed session 21. Apr 28 00:16:24.353182 systemd[1]: cri-containerd-5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f.scope: Deactivated successfully. Apr 28 00:16:24.354144 systemd[1]: cri-containerd-5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f.scope: Consumed 14.026s CPU time. 
Apr 28 00:16:24.378554 containerd[1486]: time="2026-04-28T00:16:24.378413532Z" level=info msg="shim disconnected" id=5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f namespace=k8s.io Apr 28 00:16:24.378554 containerd[1486]: time="2026-04-28T00:16:24.378469573Z" level=warning msg="cleaning up after shim disconnected" id=5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f namespace=k8s.io Apr 28 00:16:24.378554 containerd[1486]: time="2026-04-28T00:16:24.378479413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:16:24.380385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f-rootfs.mount: Deactivated successfully. Apr 28 00:16:24.537417 kubelet[2562]: E0428 00:16:24.537281 2562 controller.go:251] "Failed to update lease" err="Put \"https://178.105.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-651e172f95?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 28 00:16:24.761729 kubelet[2562]: E0428 00:16:24.760565 2562 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43370->10.0.0.2:2379: read: connection timed out" Apr 28 00:16:25.175101 kubelet[2562]: I0428 00:16:25.175068 2562 scope.go:122] "RemoveContainer" containerID="5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f" Apr 28 00:16:25.179643 containerd[1486]: time="2026-04-28T00:16:25.179604481Z" level=info msg="CreateContainer within sandbox \"ed13d46abfae1991158fc6c5160344c5c38b496d2573cb19bfb2ca5cf4b9b12e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 28 00:16:25.197083 containerd[1486]: time="2026-04-28T00:16:25.196931390Z" level=info msg="CreateContainer within sandbox \"ed13d46abfae1991158fc6c5160344c5c38b496d2573cb19bfb2ca5cf4b9b12e\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9\"" Apr 28 00:16:25.197346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486531432.mount: Deactivated successfully. Apr 28 00:16:25.200510 containerd[1486]: time="2026-04-28T00:16:25.198974240Z" level=info msg="StartContainer for \"d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9\"" Apr 28 00:16:25.245996 systemd[1]: Started cri-containerd-d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9.scope - libcontainer container d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9. Apr 28 00:16:25.275668 containerd[1486]: time="2026-04-28T00:16:25.275599095Z" level=info msg="StartContainer for \"d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9\" returns successfully" Apr 28 00:16:25.380528 systemd[1]: run-containerd-runc-k8s.io-d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9-runc.GdprAE.mount: Deactivated successfully. Apr 28 00:16:25.483375 systemd[1]: cri-containerd-9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca.scope: Deactivated successfully. Apr 28 00:16:25.487004 systemd[1]: cri-containerd-9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca.scope: Consumed 4.072s CPU time, 15.6M memory peak, 0B memory swap peak. 
Apr 28 00:16:25.511365 containerd[1486]: time="2026-04-28T00:16:25.511128439Z" level=info msg="shim disconnected" id=9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca namespace=k8s.io
Apr 28 00:16:25.511365 containerd[1486]: time="2026-04-28T00:16:25.511185000Z" level=warning msg="cleaning up after shim disconnected" id=9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca namespace=k8s.io
Apr 28 00:16:25.511365 containerd[1486]: time="2026-04-28T00:16:25.511193960Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:16:25.514002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca-rootfs.mount: Deactivated successfully.
Apr 28 00:16:25.524731 containerd[1486]: time="2026-04-28T00:16:25.524683054Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:16:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 00:16:26.181697 kubelet[2562]: I0428 00:16:26.181022 2562 scope.go:122] "RemoveContainer" containerID="9efd620b3fa3d7001a7fb614b5578268ad9a3a7796ea14a54b06f9961e40d7ca"
Apr 28 00:16:26.185840 containerd[1486]: time="2026-04-28T00:16:26.185413317Z" level=info msg="CreateContainer within sandbox \"230536ad5af556cdba89c7d4020499417b5bc67f06bcec9e78ea6ab45bb5b60b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 28 00:16:26.211908 containerd[1486]: time="2026-04-28T00:16:26.211799690Z" level=info msg="CreateContainer within sandbox \"230536ad5af556cdba89c7d4020499417b5bc67f06bcec9e78ea6ab45bb5b60b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9e3cdf0a625e96bad403b65611264b8d8393e19bbc5758efd51ff68933d14f4a\""
Apr 28 00:16:26.212789 containerd[1486]: time="2026-04-28T00:16:26.212624270Z" level=info msg="StartContainer for \"9e3cdf0a625e96bad403b65611264b8d8393e19bbc5758efd51ff68933d14f4a\""
Apr 28 00:16:26.253039 systemd[1]: Started cri-containerd-9e3cdf0a625e96bad403b65611264b8d8393e19bbc5758efd51ff68933d14f4a.scope - libcontainer container 9e3cdf0a625e96bad403b65611264b8d8393e19bbc5758efd51ff68933d14f4a.
Apr 28 00:16:26.303598 containerd[1486]: time="2026-04-28T00:16:26.303377477Z" level=info msg="StartContainer for \"9e3cdf0a625e96bad403b65611264b8d8393e19bbc5758efd51ff68933d14f4a\" returns successfully"
Apr 28 00:16:29.055000 kubelet[2562]: E0428 00:16:29.051277 2562 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43026->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-n-651e172f95.18aa5d1c300975fd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-n-651e172f95,UID:186ed3205f45f1d8fa54b55ea0789eaa,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-n-651e172f95,},FirstTimestamp:2026-04-28 00:16:18.597475837 +0000 UTC m=+197.627297088,LastTimestamp:2026-04-28 00:16:18.597475837 +0000 UTC m=+197.627297088,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-n-651e172f95,}"
Apr 28 00:16:29.748172 systemd[1]: cri-containerd-47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9.scope: Deactivated successfully.
Apr 28 00:16:29.750065 systemd[1]: cri-containerd-47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9.scope: Consumed 2.627s CPU time, 16.2M memory peak, 0B memory swap peak.
Apr 28 00:16:29.782982 containerd[1486]: time="2026-04-28T00:16:29.782820401Z" level=info msg="shim disconnected" id=47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9 namespace=k8s.io
Apr 28 00:16:29.782982 containerd[1486]: time="2026-04-28T00:16:29.782889079Z" level=warning msg="cleaning up after shim disconnected" id=47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9 namespace=k8s.io
Apr 28 00:16:29.782982 containerd[1486]: time="2026-04-28T00:16:29.782899319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:16:29.784830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9-rootfs.mount: Deactivated successfully.
Apr 28 00:16:30.206070 kubelet[2562]: I0428 00:16:30.205758 2562 scope.go:122] "RemoveContainer" containerID="47eb6813e8d4f0f2444b50f373b573232c2a9e2ee3e77547385bb130e32cada9"
Apr 28 00:16:30.209242 containerd[1486]: time="2026-04-28T00:16:30.209185332Z" level=info msg="CreateContainer within sandbox \"882b6dfb0124e3d32d0caed1d0b2d822fc612af502911ee89ab8cff8c7498cba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 28 00:16:30.232686 containerd[1486]: time="2026-04-28T00:16:30.232605327Z" level=info msg="CreateContainer within sandbox \"882b6dfb0124e3d32d0caed1d0b2d822fc612af502911ee89ab8cff8c7498cba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8458ced46d63931de715f649aac52c529c6073e882f2db8e278ee76ce0d2a778\""
Apr 28 00:16:30.234355 containerd[1486]: time="2026-04-28T00:16:30.233106873Z" level=info msg="StartContainer for \"8458ced46d63931de715f649aac52c529c6073e882f2db8e278ee76ce0d2a778\""
Apr 28 00:16:30.269904 systemd[1]: Started cri-containerd-8458ced46d63931de715f649aac52c529c6073e882f2db8e278ee76ce0d2a778.scope - libcontainer container 8458ced46d63931de715f649aac52c529c6073e882f2db8e278ee76ce0d2a778.
Apr 28 00:16:30.308452 containerd[1486]: time="2026-04-28T00:16:30.308382639Z" level=info msg="StartContainer for \"8458ced46d63931de715f649aac52c529c6073e882f2db8e278ee76ce0d2a778\" returns successfully"
Apr 28 00:16:30.785169 systemd[1]: run-containerd-runc-k8s.io-8458ced46d63931de715f649aac52c529c6073e882f2db8e278ee76ce0d2a778-runc.qzEzxG.mount: Deactivated successfully.
Apr 28 00:16:34.762565 kubelet[2562]: E0428 00:16:34.762226 2562 controller.go:251] "Failed to update lease" err="Put \"https://178.105.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-651e172f95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 28 00:16:36.668525 systemd[1]: cri-containerd-d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9.scope: Deactivated successfully.
Apr 28 00:16:36.690543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9-rootfs.mount: Deactivated successfully.
Apr 28 00:16:36.699750 containerd[1486]: time="2026-04-28T00:16:36.699465918Z" level=info msg="shim disconnected" id=d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9 namespace=k8s.io
Apr 28 00:16:36.699750 containerd[1486]: time="2026-04-28T00:16:36.699536476Z" level=warning msg="cleaning up after shim disconnected" id=d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9 namespace=k8s.io
Apr 28 00:16:36.699750 containerd[1486]: time="2026-04-28T00:16:36.699549116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:16:37.235523 kubelet[2562]: I0428 00:16:37.234952 2562 scope.go:122] "RemoveContainer" containerID="5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f"
Apr 28 00:16:37.235523 kubelet[2562]: I0428 00:16:37.235277 2562 scope.go:122] "RemoveContainer" containerID="d67691cb1584fd57a7315833dea162e7e5b25c8affb379c0763a8ae17491e9a9"
Apr 28 00:16:37.235523 kubelet[2562]: E0428 00:16:37.235415 2562 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-687949b757-zc7vd_tigera-operator(a1e64b4e-773d-46ec-a20b-d17d09c0a19d)\"" pod="tigera-operator/tigera-operator-687949b757-zc7vd" podUID="a1e64b4e-773d-46ec-a20b-d17d09c0a19d"
Apr 28 00:16:37.237622 containerd[1486]: time="2026-04-28T00:16:37.237574059Z" level=info msg="RemoveContainer for \"5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f\""
Apr 28 00:16:37.249145 containerd[1486]: time="2026-04-28T00:16:37.249076678Z" level=info msg="RemoveContainer for \"5e1638bded86bb9711e07e613461cd3f1cba16016f13649c0cf6ced54328e22f\" returns successfully"
Apr 28 00:16:44.763118 kubelet[2562]: E0428 00:16:44.762535 2562 controller.go:251] "Failed to update lease" err="Put \"https://178.105.25.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-n-651e172f95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"