Mar 12 00:43:18.195321 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 12 00:43:18.195342 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Mar 11 22:42:47 -00 2026
Mar 12 00:43:18.195350 kernel: KASLR enabled
Mar 12 00:43:18.195356 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 12 00:43:18.195364 kernel: printk: bootconsole [pl11] enabled
Mar 12 00:43:18.195369 kernel: efi: EFI v2.7 by EDK II
Mar 12 00:43:18.195377 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 12 00:43:18.195383 kernel: random: crng init done
Mar 12 00:43:18.195389 kernel: ACPI: Early table checksum verification disabled
Mar 12 00:43:18.195395 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 12 00:43:18.195401 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195407 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195414 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 12 00:43:18.195421 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195428 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195434 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195441 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195449 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195456 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195462 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 12 00:43:18.195469 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 12 00:43:18.195475 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 12 00:43:18.195482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 12 00:43:18.195488 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 12 00:43:18.195494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 12 00:43:18.195501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 12 00:43:18.195507 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 12 00:43:18.195513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 12 00:43:18.195521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 12 00:43:18.195527 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 12 00:43:18.195534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 12 00:43:18.195540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 12 00:43:18.195547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 12 00:43:18.195553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 12 00:43:18.195559 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 12 00:43:18.195565 kernel: Zone ranges:
Mar 12 00:43:18.195572 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 12 00:43:18.195578 kernel: DMA32 empty
Mar 12 00:43:18.195584 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 00:43:18.195591 kernel: Movable zone start for each node
Mar 12 00:43:18.195602 kernel: Early memory node ranges
Mar 12 00:43:18.195608 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 12 00:43:18.195615 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 12 00:43:18.195622 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 12 00:43:18.195629 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 12 00:43:18.195637 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 12 00:43:18.195644 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 12 00:43:18.195651 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 12 00:43:18.197761 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 12 00:43:18.197775 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 12 00:43:18.197782 kernel: psci: probing for conduit method from ACPI.
Mar 12 00:43:18.197789 kernel: psci: PSCIv1.1 detected in firmware.
Mar 12 00:43:18.197796 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 12 00:43:18.197803 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 12 00:43:18.197810 kernel: psci: SMC Calling Convention v1.4
Mar 12 00:43:18.197817 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 12 00:43:18.197824 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 12 00:43:18.197836 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 12 00:43:18.197843 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 12 00:43:18.197850 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 12 00:43:18.197857 kernel: Detected PIPT I-cache on CPU0
Mar 12 00:43:18.197864 kernel: CPU features: detected: GIC system register CPU interface
Mar 12 00:43:18.197871 kernel: CPU features: detected: Hardware dirty bit management
Mar 12 00:43:18.197877 kernel: CPU features: detected: Spectre-BHB
Mar 12 00:43:18.197884 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 12 00:43:18.197891 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 12 00:43:18.197898 kernel: CPU features: detected: ARM erratum 1418040
Mar 12 00:43:18.197905 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 12 00:43:18.197913 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 12 00:43:18.197920 kernel: alternatives: applying boot alternatives
Mar 12 00:43:18.197929 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ea6452ef6439cfe8258f05036d5ce6d908887775193e3b46c412fba933f4f4f3
Mar 12 00:43:18.197936 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 00:43:18.197943 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 00:43:18.197950 kernel: Fallback order for Node 0: 0
Mar 12 00:43:18.197956 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 12 00:43:18.197963 kernel: Policy zone: Normal
Mar 12 00:43:18.197970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 00:43:18.197977 kernel: software IO TLB: area num 2.
Mar 12 00:43:18.197984 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 12 00:43:18.197992 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 12 00:43:18.198000 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 12 00:43:18.198006 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 00:43:18.198014 kernel: rcu: RCU event tracing is enabled.
Mar 12 00:43:18.198022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 12 00:43:18.198029 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 00:43:18.198036 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 00:43:18.198042 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 00:43:18.198050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 12 00:43:18.198056 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 12 00:43:18.198063 kernel: GICv3: 960 SPIs implemented
Mar 12 00:43:18.198071 kernel: GICv3: 0 Extended SPIs implemented
Mar 12 00:43:18.198078 kernel: Root IRQ handler: gic_handle_irq
Mar 12 00:43:18.198085 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 12 00:43:18.198092 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 12 00:43:18.198099 kernel: ITS: No ITS available, not enabling LPIs
Mar 12 00:43:18.198106 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 00:43:18.198113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 12 00:43:18.198120 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 12 00:43:18.198127 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 12 00:43:18.198134 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 12 00:43:18.198141 kernel: Console: colour dummy device 80x25
Mar 12 00:43:18.198149 kernel: printk: console [tty1] enabled
Mar 12 00:43:18.198157 kernel: ACPI: Core revision 20230628
Mar 12 00:43:18.198164 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 12 00:43:18.198171 kernel: pid_max: default: 32768 minimum: 301
Mar 12 00:43:18.198178 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 00:43:18.198186 kernel: landlock: Up and running.
Mar 12 00:43:18.198193 kernel: SELinux: Initializing.
Mar 12 00:43:18.198200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 00:43:18.198207 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 00:43:18.198216 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 00:43:18.198223 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 12 00:43:18.198230 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 12 00:43:18.198237 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 12 00:43:18.198244 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 12 00:43:18.198251 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 00:43:18.198258 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 00:43:18.198266 kernel: Remapping and enabling EFI services.
Mar 12 00:43:18.198279 kernel: smp: Bringing up secondary CPUs ...
Mar 12 00:43:18.198286 kernel: Detected PIPT I-cache on CPU1
Mar 12 00:43:18.198293 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 12 00:43:18.198301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 12 00:43:18.198309 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 12 00:43:18.198317 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 00:43:18.198324 kernel: SMP: Total of 2 processors activated.
Mar 12 00:43:18.198332 kernel: CPU features: detected: 32-bit EL0 Support
Mar 12 00:43:18.198339 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 12 00:43:18.198348 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 12 00:43:18.198356 kernel: CPU features: detected: CRC32 instructions
Mar 12 00:43:18.198363 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 12 00:43:18.198371 kernel: CPU features: detected: LSE atomic instructions
Mar 12 00:43:18.198378 kernel: CPU features: detected: Privileged Access Never
Mar 12 00:43:18.198386 kernel: CPU: All CPU(s) started at EL1
Mar 12 00:43:18.198393 kernel: alternatives: applying system-wide alternatives
Mar 12 00:43:18.198401 kernel: devtmpfs: initialized
Mar 12 00:43:18.198408 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 00:43:18.198417 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 12 00:43:18.198425 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 00:43:18.198432 kernel: SMBIOS 3.1.0 present.
Mar 12 00:43:18.198440 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 12 00:43:18.198448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 12 00:43:18.198455 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 12 00:43:18.198463 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 12 00:43:18.198470 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 12 00:43:18.198478 kernel: audit: initializing netlink subsys (disabled) Mar 12 00:43:18.198487 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 12 00:43:18.198494 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 12 00:43:18.198502 kernel: cpuidle: using governor menu Mar 12 00:43:18.198509 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 12 00:43:18.198517 kernel: ASID allocator initialised with 32768 entries Mar 12 00:43:18.198524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 12 00:43:18.198531 kernel: Serial: AMBA PL011 UART driver Mar 12 00:43:18.198539 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 12 00:43:18.198546 kernel: Modules: 0 pages in range for non-PLT usage Mar 12 00:43:18.198555 kernel: Modules: 509008 pages in range for PLT usage Mar 12 00:43:18.198563 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 12 00:43:18.198570 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 12 00:43:18.198577 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 12 00:43:18.198585 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 12 00:43:18.198592 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 12 00:43:18.198600 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 12 00:43:18.198607 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Mar 12 00:43:18.198615 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 12 00:43:18.198624 kernel: ACPI: Added _OSI(Module Device) Mar 12 00:43:18.198631 kernel: ACPI: Added _OSI(Processor Device) Mar 12 00:43:18.198639 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 12 00:43:18.198646 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 12 00:43:18.198653 kernel: ACPI: Interpreter enabled Mar 12 00:43:18.198669 kernel: ACPI: Using GIC for interrupt routing Mar 12 00:43:18.198676 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 12 00:43:18.198684 kernel: printk: console [ttyAMA0] enabled Mar 12 00:43:18.198691 kernel: printk: bootconsole [pl11] disabled Mar 12 00:43:18.198700 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 12 00:43:18.198707 kernel: iommu: Default domain type: Translated Mar 12 00:43:18.198715 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 12 00:43:18.198722 kernel: efivars: Registered efivars operations Mar 12 00:43:18.198729 kernel: vgaarb: loaded Mar 12 00:43:18.198737 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 12 00:43:18.198744 kernel: VFS: Disk quotas dquot_6.6.0 Mar 12 00:43:18.198751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 12 00:43:18.198759 kernel: pnp: PnP ACPI init Mar 12 00:43:18.198768 kernel: pnp: PnP ACPI: found 0 devices Mar 12 00:43:18.198775 kernel: NET: Registered PF_INET protocol family Mar 12 00:43:18.198783 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 12 00:43:18.198791 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 12 00:43:18.198798 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 12 00:43:18.198806 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Mar 12 00:43:18.198813 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 12 00:43:18.198821 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 12 00:43:18.198828 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 00:43:18.198837 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 00:43:18.198845 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 12 00:43:18.198852 kernel: PCI: CLS 0 bytes, default 64 Mar 12 00:43:18.198859 kernel: kvm [1]: HYP mode not available Mar 12 00:43:18.198867 kernel: Initialise system trusted keyrings Mar 12 00:43:18.198874 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 12 00:43:18.198881 kernel: Key type asymmetric registered Mar 12 00:43:18.198889 kernel: Asymmetric key parser 'x509' registered Mar 12 00:43:18.198896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 12 00:43:18.198905 kernel: io scheduler mq-deadline registered Mar 12 00:43:18.198912 kernel: io scheduler kyber registered Mar 12 00:43:18.198920 kernel: io scheduler bfq registered Mar 12 00:43:18.198927 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 00:43:18.198934 kernel: thunder_xcv, ver 1.0 Mar 12 00:43:18.198942 kernel: thunder_bgx, ver 1.0 Mar 12 00:43:18.198949 kernel: nicpf, ver 1.0 Mar 12 00:43:18.198957 kernel: nicvf, ver 1.0 Mar 12 00:43:18.199106 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 12 00:43:18.199181 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T00:43:17 UTC (1773276197) Mar 12 00:43:18.199191 kernel: efifb: probing for efifb Mar 12 00:43:18.199199 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 12 00:43:18.199207 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 12 00:43:18.199214 kernel: efifb: scrolling: redraw Mar 12 00:43:18.199221 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Mar 12 00:43:18.199229 kernel: Console: switching to colour frame buffer device 128x48 Mar 12 00:43:18.199236 kernel: fb0: EFI VGA frame buffer device Mar 12 00:43:18.199246 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 12 00:43:18.199254 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 12 00:43:18.199261 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 12 00:43:18.199269 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 12 00:43:18.199276 kernel: watchdog: Hard watchdog permanently disabled Mar 12 00:43:18.199283 kernel: NET: Registered PF_INET6 protocol family Mar 12 00:43:18.199291 kernel: Segment Routing with IPv6 Mar 12 00:43:18.199299 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 00:43:18.199306 kernel: NET: Registered PF_PACKET protocol family Mar 12 00:43:18.199315 kernel: Key type dns_resolver registered Mar 12 00:43:18.199323 kernel: registered taskstats version 1 Mar 12 00:43:18.199330 kernel: Loading compiled-in X.509 certificates Mar 12 00:43:18.199338 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6ab5db9410a4806f8dbbb4c495dcd79d4faba591' Mar 12 00:43:18.199345 kernel: Key type .fscrypt registered Mar 12 00:43:18.199352 kernel: Key type fscrypt-provisioning registered Mar 12 00:43:18.199360 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 12 00:43:18.199367 kernel: ima: Allocated hash algorithm: sha1
Mar 12 00:43:18.199374 kernel: ima: No architecture policies found
Mar 12 00:43:18.199384 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 12 00:43:18.199391 kernel: clk: Disabling unused clocks
Mar 12 00:43:18.199398 kernel: Freeing unused kernel memory: 39424K
Mar 12 00:43:18.199406 kernel: Run /init as init process
Mar 12 00:43:18.199413 kernel: with arguments:
Mar 12 00:43:18.199420 kernel: /init
Mar 12 00:43:18.199427 kernel: with environment:
Mar 12 00:43:18.199435 kernel: HOME=/
Mar 12 00:43:18.199442 kernel: TERM=linux
Mar 12 00:43:18.199451 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 00:43:18.199462 systemd[1]: Detected virtualization microsoft.
Mar 12 00:43:18.199471 systemd[1]: Detected architecture arm64.
Mar 12 00:43:18.199478 systemd[1]: Running in initrd.
Mar 12 00:43:18.199486 systemd[1]: No hostname configured, using default hostname.
Mar 12 00:43:18.199494 systemd[1]: Hostname set to .
Mar 12 00:43:18.199502 systemd[1]: Initializing machine ID from random generator.
Mar 12 00:43:18.199512 systemd[1]: Queued start job for default target initrd.target.
Mar 12 00:43:18.199520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 00:43:18.199528 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 00:43:18.199537 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 00:43:18.199545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 00:43:18.199553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 00:43:18.199561 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 00:43:18.199570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 00:43:18.199580 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 00:43:18.199589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 00:43:18.199597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 00:43:18.199605 systemd[1]: Reached target paths.target - Path Units.
Mar 12 00:43:18.199613 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 00:43:18.199621 systemd[1]: Reached target swap.target - Swaps.
Mar 12 00:43:18.199629 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 00:43:18.199637 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 00:43:18.199647 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 00:43:18.199666 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 00:43:18.199676 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 00:43:18.199685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 00:43:18.199693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 00:43:18.199701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 00:43:18.199709 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 00:43:18.199717 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 00:43:18.199727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 00:43:18.199736 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 00:43:18.199744 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 00:43:18.199752 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 00:43:18.199760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 00:43:18.199786 systemd-journald[217]: Collecting audit messages is disabled.
Mar 12 00:43:18.199808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 00:43:18.199816 systemd-journald[217]: Journal started
Mar 12 00:43:18.199835 systemd-journald[217]: Runtime Journal (/run/log/journal/5767eeab65b8410695fb3c39ae93cf58) is 8.0M, max 78.5M, 70.5M free.
Mar 12 00:43:18.201839 systemd-modules-load[218]: Inserted module 'overlay'
Mar 12 00:43:18.215432 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 00:43:18.221685 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 00:43:18.240248 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 00:43:18.240279 kernel: Bridge firewalling registered
Mar 12 00:43:18.236017 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 12 00:43:18.237012 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 00:43:18.245586 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 00:43:18.253674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 00:43:18.261894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 00:43:18.280907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 00:43:18.293858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 00:43:18.305614 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 00:43:18.329967 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 00:43:18.342713 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 00:43:18.351177 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 00:43:18.361715 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 00:43:18.370865 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 00:43:18.392926 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 00:43:18.400843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 00:43:18.417444 dracut-cmdline[252]: dracut-dracut-053
Mar 12 00:43:18.430071 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ea6452ef6439cfe8258f05036d5ce6d908887775193e3b46c412fba933f4f4f3
Mar 12 00:43:18.419901 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 00:43:18.455732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 00:43:18.478046 systemd-resolved[255]: Positive Trust Anchors:
Mar 12 00:43:18.478060 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 00:43:18.478091 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 00:43:18.480381 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 12 00:43:18.481358 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 00:43:18.488151 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 00:43:18.574676 kernel: SCSI subsystem initialized
Mar 12 00:43:18.582668 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 00:43:18.590674 kernel: iscsi: registered transport (tcp)
Mar 12 00:43:18.606961 kernel: iscsi: registered transport (qla4xxx)
Mar 12 00:43:18.606992 kernel: QLogic iSCSI HBA Driver
Mar 12 00:43:18.645817 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 00:43:18.665777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 00:43:18.693801 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 00:43:18.693863 kernel: device-mapper: uevent: version 1.0.3
Mar 12 00:43:18.698920 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 12 00:43:18.746673 kernel: raid6: neonx8 gen() 15795 MB/s
Mar 12 00:43:18.765664 kernel: raid6: neonx4 gen() 15704 MB/s
Mar 12 00:43:18.784672 kernel: raid6: neonx2 gen() 13256 MB/s
Mar 12 00:43:18.804665 kernel: raid6: neonx1 gen() 10489 MB/s
Mar 12 00:43:18.823662 kernel: raid6: int64x8 gen() 6982 MB/s
Mar 12 00:43:18.842662 kernel: raid6: int64x4 gen() 7360 MB/s
Mar 12 00:43:18.862662 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 12 00:43:18.884549 kernel: raid6: int64x1 gen() 5072 MB/s
Mar 12 00:43:18.884559 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s
Mar 12 00:43:18.907121 kernel: raid6: .... xor() 12052 MB/s, rmw enabled
Mar 12 00:43:18.907140 kernel: raid6: using neon recovery algorithm
Mar 12 00:43:18.916724 kernel: xor: measuring software checksum speed
Mar 12 00:43:18.916744 kernel: 8regs : 19773 MB/sec
Mar 12 00:43:18.920429 kernel: 32regs : 19679 MB/sec
Mar 12 00:43:18.926224 kernel: arm64_neon : 26230 MB/sec
Mar 12 00:43:18.926235 kernel: xor: using function: arm64_neon (26230 MB/sec)
Mar 12 00:43:18.975670 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 00:43:18.985614 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 00:43:18.997794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 00:43:19.017588 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 12 00:43:19.021748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 00:43:19.036854 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 00:43:19.053581 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Mar 12 00:43:19.084493 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 00:43:19.096850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 00:43:19.133599 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 00:43:19.149897 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 00:43:19.171548 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 00:43:19.181613 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 00:43:19.196856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 00:43:19.212948 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 00:43:19.234886 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 00:43:19.253687 kernel: hv_vmbus: Vmbus version:5.3
Mar 12 00:43:19.257618 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 00:43:19.271577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 00:43:19.271750 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 00:43:19.297976 kernel: hv_vmbus: registering driver hv_netvsc
Mar 12 00:43:19.294884 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 00:43:19.320471 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 12 00:43:19.320492 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 12 00:43:19.320508 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 12 00:43:19.309802 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 00:43:19.333359 kernel: hv_vmbus: registering driver hid_hyperv
Mar 12 00:43:19.310040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 00:43:19.352927 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 12 00:43:19.352950 kernel: PTP clock support registered
Mar 12 00:43:19.325365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 00:43:19.298967 kernel: hv_utils: Registering HyperV Utility Driver
Mar 12 00:43:19.304500 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 12 00:43:19.304515 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 12 00:43:19.304633 kernel: hv_vmbus: registering driver hv_utils
Mar 12 00:43:19.304642 kernel: hv_utils: Heartbeat IC version 3.0
Mar 12 00:43:19.304650 kernel: hv_utils: Shutdown IC version 3.2
Mar 12 00:43:19.304659 kernel: hv_utils: TimeSync IC version 4.0
Mar 12 00:43:19.304666 kernel: hv_vmbus: registering driver hv_storvsc
Mar 12 00:43:19.304674 kernel: scsi host0: storvsc_host_t
Mar 12 00:43:19.304777 kernel: scsi host1: storvsc_host_t
Mar 12 00:43:19.304868 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 12 00:43:19.304888 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: VF slot 1 added
Mar 12 00:43:19.304973 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 12 00:43:19.304989 systemd-journald[217]: Time jumped backwards, rotating.
Mar 12 00:43:19.261512 systemd-resolved[255]: Clock change detected. Flushing caches.
Mar 12 00:43:19.262633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 00:43:19.324853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 00:43:19.347339 kernel: hv_vmbus: registering driver hv_pci
Mar 12 00:43:19.347359 kernel: hv_pci a423e0bb-a430-4026-91eb-a7057e8c768e: PCI VMBus probing: Using version 0x10004
Mar 12 00:43:19.347529 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 12 00:43:19.324969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 00:43:19.360949 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 00:43:19.361108 kernel: hv_pci a423e0bb-a430-4026-91eb-a7057e8c768e: PCI host bridge to bus a430:00
Mar 12 00:43:19.346678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 00:43:19.373439 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 12 00:43:19.378206 kernel: pci_bus a430:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 12 00:43:19.378313 kernel: pci_bus a430:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 12 00:43:19.396968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 00:43:19.397187 kernel: pci a430:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 12 00:43:19.406679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 00:43:19.436360 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 12 00:43:19.444468 kernel: pci a430:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 12 00:43:19.444494 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 12 00:43:19.444587 kernel: pci a430:00:02.0: enabling Extended Tags
Mar 12 00:43:19.444602 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 12 00:43:19.434246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 00:43:19.468175 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 12 00:43:19.468337 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 12 00:43:19.468435 kernel: pci a430:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a430:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 12 00:43:19.476398 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:19.476447 kernel: pci_bus a430:00: busn_res: [bus 00-ff] end is updated to 00 Mar 12 00:43:19.480843 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 12 00:43:19.481008 kernel: pci a430:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 12 00:43:19.489424 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 12 00:43:19.495279 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 00:43:19.541371 kernel: mlx5_core a430:00:02.0: enabling device (0000 -> 0002) Mar 12 00:43:19.547384 kernel: mlx5_core a430:00:02.0: firmware version: 16.30.5026 Mar 12 00:43:19.742691 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: VF registering: eth1 Mar 12 00:43:19.742884 kernel: mlx5_core a430:00:02.0 eth1: joined to eth0 Mar 12 00:43:19.748029 kernel: mlx5_core a430:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 12 00:43:19.758402 kernel: mlx5_core a430:00:02.0 enP42032s1: renamed from eth1 Mar 12 00:43:20.061397 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 12 00:43:20.085397 kernel: BTRFS: device fsid b3e5ce46-39d1-4da5-be73-65a819b9939b devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (504) Mar 12 00:43:20.097170 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Mar 12 00:43:20.112406 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 12 00:43:20.117690 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 12 00:43:20.143559 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (502) Mar 12 00:43:20.145600 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 00:43:20.169922 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 12 00:43:20.179789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:20.189401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:20.198397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:21.199483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:21.199532 disk-uuid[605]: The operation has completed successfully. Mar 12 00:43:21.261768 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 00:43:21.261858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 00:43:21.299542 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 00:43:21.309558 sh[718]: Success Mar 12 00:43:21.349405 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 12 00:43:21.627888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 00:43:21.638530 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 00:43:21.653847 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 12 00:43:21.686260 kernel: BTRFS info (device dm-0): first mount of filesystem b3e5ce46-39d1-4da5-be73-65a819b9939b Mar 12 00:43:21.686289 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:21.686299 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 12 00:43:21.686308 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 12 00:43:21.686316 kernel: BTRFS info (device dm-0): using free space tree Mar 12 00:43:21.965100 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 00:43:21.969226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 00:43:21.986670 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 00:43:21.992553 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 00:43:22.028392 kernel: BTRFS info (device sda6): first mount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:22.028448 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:22.032212 kernel: BTRFS info (device sda6): using free space tree Mar 12 00:43:22.068964 kernel: BTRFS info (device sda6): auto enabling async discard Mar 12 00:43:22.081841 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 12 00:43:22.086449 kernel: BTRFS info (device sda6): last unmount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:22.093806 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 12 00:43:22.111896 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 00:43:22.122492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 00:43:22.140537 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 12 00:43:22.164832 systemd-networkd[906]: lo: Link UP Mar 12 00:43:22.164841 systemd-networkd[906]: lo: Gained carrier Mar 12 00:43:22.166469 systemd-networkd[906]: Enumeration completed Mar 12 00:43:22.167722 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 00:43:22.168154 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 00:43:22.168158 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 00:43:22.172736 systemd[1]: Reached target network.target - Network. Mar 12 00:43:22.251401 kernel: mlx5_core a430:00:02.0 enP42032s1: Link up Mar 12 00:43:22.289397 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: Data path switched to VF: enP42032s1 Mar 12 00:43:22.289677 systemd-networkd[906]: enP42032s1: Link UP Mar 12 00:43:22.289768 systemd-networkd[906]: eth0: Link UP Mar 12 00:43:22.289864 systemd-networkd[906]: eth0: Gained carrier Mar 12 00:43:22.289873 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 00:43:22.308924 systemd-networkd[906]: enP42032s1: Gained carrier Mar 12 00:43:22.319409 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 00:43:23.136945 ignition[891]: Ignition 2.19.0 Mar 12 00:43:23.136955 ignition[891]: Stage: fetch-offline Mar 12 00:43:23.140039 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 12 00:43:23.136992 ignition[891]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.137000 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.137095 ignition[891]: parsed url from cmdline: "" Mar 12 00:43:23.137098 ignition[891]: no config URL provided Mar 12 00:43:23.137103 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 00:43:23.137109 ignition[891]: no config at "/usr/lib/ignition/user.ign" Mar 12 00:43:23.165617 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 12 00:43:23.137114 ignition[891]: failed to fetch config: resource requires networking Mar 12 00:43:23.137281 ignition[891]: Ignition finished successfully Mar 12 00:43:23.183317 ignition[915]: Ignition 2.19.0 Mar 12 00:43:23.183324 ignition[915]: Stage: fetch Mar 12 00:43:23.183512 ignition[915]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.183520 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.183610 ignition[915]: parsed url from cmdline: "" Mar 12 00:43:23.183613 ignition[915]: no config URL provided Mar 12 00:43:23.183618 ignition[915]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 00:43:23.183625 ignition[915]: no config at "/usr/lib/ignition/user.ign" Mar 12 00:43:23.183646 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 12 00:43:23.279215 ignition[915]: GET result: OK Mar 12 00:43:23.279299 ignition[915]: config has been read from IMDS userdata Mar 12 00:43:23.279342 ignition[915]: parsing config with SHA512: 7a5bdb39ef8c127730b47d365706322b57369c559770441002ad1c9ce21218709fca455995234aabcc894eff697da6a32804bfbcf6c2885883750c72c914414a Mar 12 00:43:23.283045 unknown[915]: fetched base config from "system" Mar 12 00:43:23.283742 ignition[915]: fetch: fetch complete Mar 12 00:43:23.283052 unknown[915]: fetched base config from "system"
Mar 12 00:43:23.283747 ignition[915]: fetch: fetch passed Mar 12 00:43:23.283057 unknown[915]: fetched user config from "azure" Mar 12 00:43:23.283804 ignition[915]: Ignition finished successfully Mar 12 00:43:23.285522 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 12 00:43:23.299578 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 12 00:43:23.323818 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 12 00:43:23.319796 ignition[921]: Ignition 2.19.0 Mar 12 00:43:23.319802 ignition[921]: Stage: kargs Mar 12 00:43:23.320029 ignition[921]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.320038 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.340520 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 00:43:23.321318 ignition[921]: kargs: kargs passed Mar 12 00:43:23.321370 ignition[921]: Ignition finished successfully Mar 12 00:43:23.363312 ignition[927]: Ignition 2.19.0 Mar 12 00:43:23.363321 ignition[927]: Stage: disks Mar 12 00:43:23.367878 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 00:43:23.363507 ignition[927]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.373893 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 00:43:23.363517 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.383168 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 00:43:23.364362 ignition[927]: disks: disks passed Mar 12 00:43:23.392546 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 00:43:23.364418 ignition[927]: Ignition finished successfully Mar 12 00:43:23.402292 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 00:43:23.412247 systemd[1]: Reached target basic.target - Basic System.
Mar 12 00:43:23.431604 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 00:43:23.503051 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 12 00:43:23.511661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 00:43:23.524541 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 00:43:23.576639 kernel: EXT4-fs (sda9): mounted filesystem c38155a0-efb1-4bf7-a848-c3aca111ae6d r/w with ordered data mode. Quota mode: none. Mar 12 00:43:23.578036 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 00:43:23.584984 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 00:43:23.623444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 00:43:23.641389 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946) Mar 12 00:43:23.652066 kernel: BTRFS info (device sda6): first mount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:23.652084 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:23.656252 kernel: BTRFS info (device sda6): using free space tree Mar 12 00:43:23.663390 kernel: BTRFS info (device sda6): auto enabling async discard Mar 12 00:43:23.663477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 00:43:23.671529 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 12 00:43:23.678675 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 00:43:23.678711 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 00:43:23.689664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 00:43:23.703084 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 12 00:43:23.720607 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 12 00:43:24.061471 systemd-networkd[906]: eth0: Gained IPv6LL Mar 12 00:43:24.434507 coreos-metadata[963]: Mar 12 00:43:24.434 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 12 00:43:24.442315 coreos-metadata[963]: Mar 12 00:43:24.442 INFO Fetch successful Mar 12 00:43:24.442315 coreos-metadata[963]: Mar 12 00:43:24.442 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 12 00:43:24.456754 coreos-metadata[963]: Mar 12 00:43:24.456 INFO Fetch successful Mar 12 00:43:24.472427 coreos-metadata[963]: Mar 12 00:43:24.471 INFO wrote hostname ci-4081.3.6-n-d10d02cd33 to /sysroot/etc/hostname Mar 12 00:43:24.480566 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 12 00:43:24.679812 initrd-setup-root[976]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 00:43:24.716225 initrd-setup-root[983]: cut: /sysroot/etc/group: No such file or directory Mar 12 00:43:24.724054 initrd-setup-root[990]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 00:43:24.731317 initrd-setup-root[997]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 00:43:25.742000 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 00:43:25.759808 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 00:43:25.775386 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 00:43:25.782789 kernel: BTRFS info (device sda6): last unmount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:25.785446 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 12 00:43:25.805344 ignition[1065]: INFO : Ignition 2.19.0 Mar 12 00:43:25.810064 ignition[1065]: INFO : Stage: mount Mar 12 00:43:25.812927 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:25.812927 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:25.812927 ignition[1065]: INFO : mount: mount passed Mar 12 00:43:25.812927 ignition[1065]: INFO : Ignition finished successfully Mar 12 00:43:25.811139 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 00:43:25.817805 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 00:43:25.838495 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 00:43:25.855884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 00:43:25.873405 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076) Mar 12 00:43:25.884029 kernel: BTRFS info (device sda6): first mount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:25.884067 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:25.887259 kernel: BTRFS info (device sda6): using free space tree Mar 12 00:43:25.896399 kernel: BTRFS info (device sda6): auto enabling async discard Mar 12 00:43:25.895723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 12 00:43:25.921779 ignition[1093]: INFO : Ignition 2.19.0 Mar 12 00:43:25.921779 ignition[1093]: INFO : Stage: files Mar 12 00:43:25.927841 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:25.927841 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:25.927841 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Mar 12 00:43:25.958484 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 00:43:25.958484 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 00:43:26.045173 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 00:43:26.050876 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 00:43:26.050876 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 00:43:26.045586 unknown[1093]: wrote ssh authorized keys file for user: core Mar 12 00:43:26.077579 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 00:43:26.086311 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 12 00:43:26.179963 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 00:43:26.426500 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 00:43:26.426500 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 12 00:43:27.060231 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 00:43:27.548369 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 00:43:27.548369 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 00:43:27.578252 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: files passed Mar 12 00:43:27.587731 ignition[1093]: INFO : Ignition finished successfully Mar 12 00:43:27.588289 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 00:43:27.617621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 00:43:27.631538 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 00:43:27.647157 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 00:43:27.648399 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 00:43:27.681279 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 00:43:27.689431 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 00:43:27.689431 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 00:43:27.683347 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 00:43:27.694040 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 00:43:27.717655 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 00:43:27.750136 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 00:43:27.750260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 00:43:27.760506 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 00:43:27.770915 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 00:43:27.779304 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 00:43:27.790857 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 00:43:27.809665 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 00:43:27.821646 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 00:43:27.839230 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 00:43:27.844336 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 12 00:43:27.853884 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 00:43:27.862209 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 00:43:27.862337 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 00:43:27.874843 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 00:43:27.884304 systemd[1]: Stopped target basic.target - Basic System. Mar 12 00:43:27.892198 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 00:43:27.900528 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 00:43:27.909712 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 00:43:27.919188 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 00:43:27.928012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 00:43:27.937564 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 00:43:27.947310 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 00:43:27.955495 systemd[1]: Stopped target swap.target - Swaps. Mar 12 00:43:27.963001 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 00:43:27.963167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 00:43:27.975178 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 00:43:27.984132 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 00:43:27.993895 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 00:43:27.998285 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 00:43:28.003565 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 00:43:28.003724 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 12 00:43:28.017430 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 00:43:28.017592 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 00:43:28.026932 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 00:43:28.027086 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 00:43:28.035347 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 12 00:43:28.035498 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 12 00:43:28.062471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 00:43:28.078031 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 00:43:28.092669 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 00:43:28.104763 ignition[1146]: INFO : Ignition 2.19.0 Mar 12 00:43:28.104763 ignition[1146]: INFO : Stage: umount Mar 12 00:43:28.104763 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:28.104763 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:28.104763 ignition[1146]: INFO : umount: umount passed Mar 12 00:43:28.104763 ignition[1146]: INFO : Ignition finished successfully Mar 12 00:43:28.092828 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 00:43:28.099918 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 00:43:28.100028 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 00:43:28.118640 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 00:43:28.118750 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 00:43:28.132628 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 00:43:28.132904 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Mar 12 00:43:28.140920 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 00:43:28.140973 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 00:43:28.149317 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 12 00:43:28.149361 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 12 00:43:28.158601 systemd[1]: Stopped target network.target - Network. Mar 12 00:43:28.166722 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 00:43:28.166772 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 00:43:28.175832 systemd[1]: Stopped target paths.target - Path Units. Mar 12 00:43:28.183527 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 00:43:28.189403 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 00:43:28.196599 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 00:43:28.204701 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 00:43:28.213181 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 00:43:28.213231 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 00:43:28.220866 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 00:43:28.220903 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 00:43:28.232660 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 00:43:28.232715 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 00:43:28.240567 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 00:43:28.240607 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 00:43:28.249242 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 00:43:28.258203 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Mar 12 00:43:28.266603 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 00:43:28.267153 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 00:43:28.267244 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 00:43:28.269687 systemd-networkd[906]: eth0: DHCPv6 lease lost Mar 12 00:43:28.276452 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 00:43:28.276557 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 00:43:28.288143 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 00:43:28.288237 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 00:43:28.299435 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 00:43:28.299492 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 00:43:28.315512 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 00:43:28.464789 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: Data path switched from VF: enP42032s1 Mar 12 00:43:28.323129 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 00:43:28.323213 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 00:43:28.333354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 00:43:28.333417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 00:43:28.341768 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 00:43:28.341814 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 00:43:28.349912 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 00:43:28.349951 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 12 00:43:28.359145 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 00:43:28.384922 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 00:43:28.385065 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 00:43:28.405211 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 00:43:28.405277 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 00:43:28.413833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 00:43:28.413876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 00:43:28.422574 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 00:43:28.422626 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 00:43:28.435648 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 00:43:28.435703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 00:43:28.459277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 00:43:28.459364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 00:43:28.480635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 00:43:28.494201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 00:43:28.494283 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 00:43:28.499800 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 12 00:43:28.499850 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 00:43:28.509354 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 00:43:28.509401 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 12 00:43:28.525439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 00:43:28.525510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 00:43:28.536213 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 00:43:28.536312 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 00:43:28.544998 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 00:43:28.545075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 00:43:28.556801 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 00:43:28.556882 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 00:43:28.566176 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 00:43:28.574652 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 00:43:28.574754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 00:43:28.596563 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 00:43:28.678229 systemd[1]: Switching root. 
Mar 12 00:43:28.773978 systemd-journald[217]: Journal stopped Mar 12 00:43:18.195622 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 12 00:43:18.195629 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 12 00:43:18.195637 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 12 00:43:18.195644 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 12 00:43:18.195651 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 12 00:43:18.197761 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 12 00:43:18.197775 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 12 00:43:18.197782 kernel: psci: probing for conduit method from ACPI. Mar 12 00:43:18.197789 kernel: psci: PSCIv1.1 detected in firmware. Mar 12 00:43:18.197796 kernel: psci: Using standard PSCI v0.2 function IDs Mar 12 00:43:18.197803 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 12 00:43:18.197810 kernel: psci: SMC Calling Convention v1.4 Mar 12 00:43:18.197817 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 12 00:43:18.197824 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 12 00:43:18.197836 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 12 00:43:18.197843 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 12 00:43:18.197850 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 12 00:43:18.197857 kernel: Detected PIPT I-cache on CPU0 Mar 12 00:43:18.197864 kernel: CPU features: detected: GIC system register CPU interface Mar 12 00:43:18.197871 kernel: CPU features: detected: Hardware dirty bit management Mar 12 00:43:18.197877 kernel: CPU features: detected: Spectre-BHB Mar 12 00:43:18.197884 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 12 00:43:18.197891 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 12 00:43:18.197898 kernel: CPU features: detected: ARM erratum 1418040 Mar 12 00:43:18.197905 kernel: CPU features: detected: ARM erratum 
1542419 (kernel portion) Mar 12 00:43:18.197913 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 12 00:43:18.197920 kernel: alternatives: applying boot alternatives Mar 12 00:43:18.197929 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ea6452ef6439cfe8258f05036d5ce6d908887775193e3b46c412fba933f4f4f3 Mar 12 00:43:18.197936 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 12 00:43:18.197943 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 12 00:43:18.197950 kernel: Fallback order for Node 0: 0 Mar 12 00:43:18.197956 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 12 00:43:18.197963 kernel: Policy zone: Normal Mar 12 00:43:18.197970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 12 00:43:18.197977 kernel: software IO TLB: area num 2. Mar 12 00:43:18.197984 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 12 00:43:18.197992 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 12 00:43:18.198000 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 12 00:43:18.198006 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 12 00:43:18.198014 kernel: rcu: RCU event tracing is enabled. Mar 12 00:43:18.198022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 12 00:43:18.198029 kernel: Trampoline variant of Tasks RCU enabled. Mar 12 00:43:18.198036 kernel: Tracing variant of Tasks RCU enabled. 
Mar 12 00:43:18.198042 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 12 00:43:18.198050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 12 00:43:18.198056 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 12 00:43:18.198063 kernel: GICv3: 960 SPIs implemented Mar 12 00:43:18.198071 kernel: GICv3: 0 Extended SPIs implemented Mar 12 00:43:18.198078 kernel: Root IRQ handler: gic_handle_irq Mar 12 00:43:18.198085 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 12 00:43:18.198092 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 12 00:43:18.198099 kernel: ITS: No ITS available, not enabling LPIs Mar 12 00:43:18.198106 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 12 00:43:18.198113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 12 00:43:18.198120 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 12 00:43:18.198127 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 12 00:43:18.198134 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 12 00:43:18.198141 kernel: Console: colour dummy device 80x25 Mar 12 00:43:18.198149 kernel: printk: console [tty1] enabled Mar 12 00:43:18.198157 kernel: ACPI: Core revision 20230628 Mar 12 00:43:18.198164 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 12 00:43:18.198171 kernel: pid_max: default: 32768 minimum: 301 Mar 12 00:43:18.198178 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 12 00:43:18.198186 kernel: landlock: Up and running. Mar 12 00:43:18.198193 kernel: SELinux: Initializing. 
Mar 12 00:43:18.198200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 00:43:18.198207 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 00:43:18.198216 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 12 00:43:18.198223 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 12 00:43:18.198230 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 12 00:43:18.198237 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 12 00:43:18.198244 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 12 00:43:18.198251 kernel: rcu: Hierarchical SRCU implementation. Mar 12 00:43:18.198258 kernel: rcu: Max phase no-delay instances is 400. Mar 12 00:43:18.198266 kernel: Remapping and enabling EFI services. Mar 12 00:43:18.198279 kernel: smp: Bringing up secondary CPUs ... Mar 12 00:43:18.198286 kernel: Detected PIPT I-cache on CPU1 Mar 12 00:43:18.198293 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 12 00:43:18.198301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 12 00:43:18.198309 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 12 00:43:18.198317 kernel: smp: Brought up 1 node, 2 CPUs Mar 12 00:43:18.198324 kernel: SMP: Total of 2 processors activated. 
Mar 12 00:43:18.198332 kernel: CPU features: detected: 32-bit EL0 Support Mar 12 00:43:18.198339 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 12 00:43:18.198348 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 12 00:43:18.198356 kernel: CPU features: detected: CRC32 instructions Mar 12 00:43:18.198363 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 12 00:43:18.198371 kernel: CPU features: detected: LSE atomic instructions Mar 12 00:43:18.198378 kernel: CPU features: detected: Privileged Access Never Mar 12 00:43:18.198386 kernel: CPU: All CPU(s) started at EL1 Mar 12 00:43:18.198393 kernel: alternatives: applying system-wide alternatives Mar 12 00:43:18.198401 kernel: devtmpfs: initialized Mar 12 00:43:18.198408 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 12 00:43:18.198417 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 12 00:43:18.198425 kernel: pinctrl core: initialized pinctrl subsystem Mar 12 00:43:18.198432 kernel: SMBIOS 3.1.0 present. 
Mar 12 00:43:18.198440 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 12 00:43:18.198448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 12 00:43:18.198455 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 12 00:43:18.198463 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 12 00:43:18.198470 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 12 00:43:18.198478 kernel: audit: initializing netlink subsys (disabled) Mar 12 00:43:18.198487 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 12 00:43:18.198494 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 12 00:43:18.198502 kernel: cpuidle: using governor menu Mar 12 00:43:18.198509 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 12 00:43:18.198517 kernel: ASID allocator initialised with 32768 entries Mar 12 00:43:18.198524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 12 00:43:18.198531 kernel: Serial: AMBA PL011 UART driver Mar 12 00:43:18.198539 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 12 00:43:18.198546 kernel: Modules: 0 pages in range for non-PLT usage Mar 12 00:43:18.198555 kernel: Modules: 509008 pages in range for PLT usage Mar 12 00:43:18.198563 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 12 00:43:18.198570 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 12 00:43:18.198577 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 12 00:43:18.198585 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 12 00:43:18.198592 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 12 00:43:18.198600 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 12 00:43:18.198607 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Mar 12 00:43:18.198615 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 12 00:43:18.198624 kernel: ACPI: Added _OSI(Module Device) Mar 12 00:43:18.198631 kernel: ACPI: Added _OSI(Processor Device) Mar 12 00:43:18.198639 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 12 00:43:18.198646 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 12 00:43:18.198653 kernel: ACPI: Interpreter enabled Mar 12 00:43:18.198669 kernel: ACPI: Using GIC for interrupt routing Mar 12 00:43:18.198676 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 12 00:43:18.198684 kernel: printk: console [ttyAMA0] enabled Mar 12 00:43:18.198691 kernel: printk: bootconsole [pl11] disabled Mar 12 00:43:18.198700 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 12 00:43:18.198707 kernel: iommu: Default domain type: Translated Mar 12 00:43:18.198715 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 12 00:43:18.198722 kernel: efivars: Registered efivars operations Mar 12 00:43:18.198729 kernel: vgaarb: loaded Mar 12 00:43:18.198737 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 12 00:43:18.198744 kernel: VFS: Disk quotas dquot_6.6.0 Mar 12 00:43:18.198751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 12 00:43:18.198759 kernel: pnp: PnP ACPI init Mar 12 00:43:18.198768 kernel: pnp: PnP ACPI: found 0 devices Mar 12 00:43:18.198775 kernel: NET: Registered PF_INET protocol family Mar 12 00:43:18.198783 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 12 00:43:18.198791 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 12 00:43:18.198798 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 12 00:43:18.198806 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Mar 12 00:43:18.198813 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 12 00:43:18.198821 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 12 00:43:18.198828 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 00:43:18.198837 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 00:43:18.198845 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 12 00:43:18.198852 kernel: PCI: CLS 0 bytes, default 64 Mar 12 00:43:18.198859 kernel: kvm [1]: HYP mode not available Mar 12 00:43:18.198867 kernel: Initialise system trusted keyrings Mar 12 00:43:18.198874 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 12 00:43:18.198881 kernel: Key type asymmetric registered Mar 12 00:43:18.198889 kernel: Asymmetric key parser 'x509' registered Mar 12 00:43:18.198896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 12 00:43:18.198905 kernel: io scheduler mq-deadline registered Mar 12 00:43:18.198912 kernel: io scheduler kyber registered Mar 12 00:43:18.198920 kernel: io scheduler bfq registered Mar 12 00:43:18.198927 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 00:43:18.198934 kernel: thunder_xcv, ver 1.0 Mar 12 00:43:18.198942 kernel: thunder_bgx, ver 1.0 Mar 12 00:43:18.198949 kernel: nicpf, ver 1.0 Mar 12 00:43:18.198957 kernel: nicvf, ver 1.0 Mar 12 00:43:18.199106 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 12 00:43:18.199181 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-12T00:43:17 UTC (1773276197) Mar 12 00:43:18.199191 kernel: efifb: probing for efifb Mar 12 00:43:18.199199 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 12 00:43:18.199207 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 12 00:43:18.199214 kernel: efifb: scrolling: redraw Mar 12 00:43:18.199221 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Mar 12 00:43:18.199229 kernel: Console: switching to colour frame buffer device 128x48 Mar 12 00:43:18.199236 kernel: fb0: EFI VGA frame buffer device Mar 12 00:43:18.199246 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 12 00:43:18.199254 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 12 00:43:18.199261 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 12 00:43:18.199269 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 12 00:43:18.199276 kernel: watchdog: Hard watchdog permanently disabled Mar 12 00:43:18.199283 kernel: NET: Registered PF_INET6 protocol family Mar 12 00:43:18.199291 kernel: Segment Routing with IPv6 Mar 12 00:43:18.199299 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 00:43:18.199306 kernel: NET: Registered PF_PACKET protocol family Mar 12 00:43:18.199315 kernel: Key type dns_resolver registered Mar 12 00:43:18.199323 kernel: registered taskstats version 1 Mar 12 00:43:18.199330 kernel: Loading compiled-in X.509 certificates Mar 12 00:43:18.199338 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6ab5db9410a4806f8dbbb4c495dcd79d4faba591' Mar 12 00:43:18.199345 kernel: Key type .fscrypt registered Mar 12 00:43:18.199352 kernel: Key type fscrypt-provisioning registered Mar 12 00:43:18.199360 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 12 00:43:18.199367 kernel: ima: Allocated hash algorithm: sha1 Mar 12 00:43:18.199374 kernel: ima: No architecture policies found Mar 12 00:43:18.199384 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 12 00:43:18.199391 kernel: clk: Disabling unused clocks Mar 12 00:43:18.199398 kernel: Freeing unused kernel memory: 39424K Mar 12 00:43:18.199406 kernel: Run /init as init process Mar 12 00:43:18.199413 kernel: with arguments: Mar 12 00:43:18.199420 kernel: /init Mar 12 00:43:18.199427 kernel: with environment: Mar 12 00:43:18.199435 kernel: HOME=/ Mar 12 00:43:18.199442 kernel: TERM=linux Mar 12 00:43:18.199451 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 00:43:18.199462 systemd[1]: Detected virtualization microsoft. Mar 12 00:43:18.199471 systemd[1]: Detected architecture arm64. Mar 12 00:43:18.199478 systemd[1]: Running in initrd. Mar 12 00:43:18.199486 systemd[1]: No hostname configured, using default hostname. Mar 12 00:43:18.199494 systemd[1]: Hostname set to . Mar 12 00:43:18.199502 systemd[1]: Initializing machine ID from random generator. Mar 12 00:43:18.199512 systemd[1]: Queued start job for default target initrd.target. Mar 12 00:43:18.199520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 00:43:18.199528 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 00:43:18.199537 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 12 00:43:18.199545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 12 00:43:18.199553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 12 00:43:18.199561 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 12 00:43:18.199570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 12 00:43:18.199580 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 12 00:43:18.199589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 00:43:18.199597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 00:43:18.199605 systemd[1]: Reached target paths.target - Path Units. Mar 12 00:43:18.199613 systemd[1]: Reached target slices.target - Slice Units. Mar 12 00:43:18.199621 systemd[1]: Reached target swap.target - Swaps. Mar 12 00:43:18.199629 systemd[1]: Reached target timers.target - Timer Units. Mar 12 00:43:18.199637 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 00:43:18.199647 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 00:43:18.199666 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 00:43:18.199676 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 12 00:43:18.199685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 00:43:18.199693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 00:43:18.199701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 00:43:18.199709 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 00:43:18.199717 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 12 00:43:18.199727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 00:43:18.199736 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 12 00:43:18.199744 systemd[1]: Starting systemd-fsck-usr.service... Mar 12 00:43:18.199752 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 00:43:18.199760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 00:43:18.199786 systemd-journald[217]: Collecting audit messages is disabled. Mar 12 00:43:18.199808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 00:43:18.199816 systemd-journald[217]: Journal started Mar 12 00:43:18.199835 systemd-journald[217]: Runtime Journal (/run/log/journal/5767eeab65b8410695fb3c39ae93cf58) is 8.0M, max 78.5M, 70.5M free. Mar 12 00:43:18.201839 systemd-modules-load[218]: Inserted module 'overlay' Mar 12 00:43:18.215432 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 00:43:18.221685 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 12 00:43:18.240248 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 12 00:43:18.240279 kernel: Bridge firewalling registered Mar 12 00:43:18.236017 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 12 00:43:18.237012 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 00:43:18.245586 systemd[1]: Finished systemd-fsck-usr.service. Mar 12 00:43:18.253674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 00:43:18.261894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 00:43:18.280907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 12 00:43:18.293858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 00:43:18.305614 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 00:43:18.329967 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 00:43:18.342713 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 00:43:18.351177 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 00:43:18.361715 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 00:43:18.370865 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 00:43:18.392926 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 12 00:43:18.400843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 00:43:18.417444 dracut-cmdline[252]: dracut-dracut-053 Mar 12 00:43:18.430071 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ea6452ef6439cfe8258f05036d5ce6d908887775193e3b46c412fba933f4f4f3 Mar 12 00:43:18.419901 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 00:43:18.455732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 00:43:18.478046 systemd-resolved[255]: Positive Trust Anchors:
Mar 12 00:43:18.478060 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 00:43:18.478091 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 00:43:18.480381 systemd-resolved[255]: Defaulting to hostname 'linux'. Mar 12 00:43:18.481358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 00:43:18.488151 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 00:43:18.574676 kernel: SCSI subsystem initialized Mar 12 00:43:18.582668 kernel: Loading iSCSI transport class v2.0-870. Mar 12 00:43:18.590674 kernel: iscsi: registered transport (tcp) Mar 12 00:43:18.606961 kernel: iscsi: registered transport (qla4xxx) Mar 12 00:43:18.606992 kernel: QLogic iSCSI HBA Driver Mar 12 00:43:18.645817 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 00:43:18.665777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 00:43:18.693801 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 00:43:18.693863 kernel: device-mapper: uevent: version 1.0.3 Mar 12 00:43:18.698920 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 12 00:43:18.746673 kernel: raid6: neonx8 gen() 15795 MB/s Mar 12 00:43:18.765664 kernel: raid6: neonx4 gen() 15704 MB/s Mar 12 00:43:18.784672 kernel: raid6: neonx2 gen() 13256 MB/s Mar 12 00:43:18.804665 kernel: raid6: neonx1 gen() 10489 MB/s Mar 12 00:43:18.823662 kernel: raid6: int64x8 gen() 6982 MB/s Mar 12 00:43:18.842662 kernel: raid6: int64x4 gen() 7360 MB/s Mar 12 00:43:18.862662 kernel: raid6: int64x2 gen() 6146 MB/s Mar 12 00:43:18.884549 kernel: raid6: int64x1 gen() 5072 MB/s Mar 12 00:43:18.884559 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s Mar 12 00:43:18.907121 kernel: raid6: .... xor() 12052 MB/s, rmw enabled Mar 12 00:43:18.907140 kernel: raid6: using neon recovery algorithm Mar 12 00:43:18.916724 kernel: xor: measuring software checksum speed Mar 12 00:43:18.916744 kernel: 8regs : 19773 MB/sec Mar 12 00:43:18.920429 kernel: 32regs : 19679 MB/sec Mar 12 00:43:18.926224 kernel: arm64_neon : 26230 MB/sec Mar 12 00:43:18.926235 kernel: xor: using function: arm64_neon (26230 MB/sec) Mar 12 00:43:18.975670 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 00:43:18.985614 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 00:43:18.997794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 00:43:19.017588 systemd-udevd[438]: Using default interface naming scheme 'v255'. Mar 12 00:43:19.021748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 00:43:19.036854 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 00:43:19.053581 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation Mar 12 00:43:19.084493 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 12 00:43:19.096850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 00:43:19.133599 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 00:43:19.149897 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 00:43:19.171548 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 00:43:19.181613 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 00:43:19.196856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 00:43:19.212948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 00:43:19.234886 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 00:43:19.253687 kernel: hv_vmbus: Vmbus version:5.3 Mar 12 00:43:19.257618 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 00:43:19.271577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 00:43:19.271750 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 00:43:19.297976 kernel: hv_vmbus: registering driver hv_netvsc Mar 12 00:43:19.294884 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 00:43:19.320471 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 12 00:43:19.320492 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 12 00:43:19.320508 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 12 00:43:19.309802 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 00:43:19.333359 kernel: hv_vmbus: registering driver hid_hyperv Mar 12 00:43:19.310040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 12 00:43:19.352927 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 12 00:43:19.352950 kernel: PTP clock support registered Mar 12 00:43:19.325365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 00:43:19.298967 kernel: hv_utils: Registering HyperV Utility Driver Mar 12 00:43:19.304500 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 12 00:43:19.304515 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 12 00:43:19.304633 kernel: hv_vmbus: registering driver hv_utils Mar 12 00:43:19.304642 kernel: hv_utils: Heartbeat IC version 3.0 Mar 12 00:43:19.304650 kernel: hv_utils: Shutdown IC version 3.2 Mar 12 00:43:19.304659 kernel: hv_utils: TimeSync IC version 4.0 Mar 12 00:43:19.304666 kernel: hv_vmbus: registering driver hv_storvsc Mar 12 00:43:19.304674 kernel: scsi host0: storvsc_host_t Mar 12 00:43:19.304777 kernel: scsi host1: storvsc_host_t Mar 12 00:43:19.304868 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 12 00:43:19.304888 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: VF slot 1 added Mar 12 00:43:19.304973 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 12 00:43:19.304989 systemd-journald[217]: Time jumped backwards, rotating. Mar 12 00:43:19.261512 systemd-resolved[255]: Clock change detected. Flushing caches. Mar 12 00:43:19.262633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 00:43:19.324853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 12 00:43:19.347339 kernel: hv_vmbus: registering driver hv_pci Mar 12 00:43:19.347359 kernel: hv_pci a423e0bb-a430-4026-91eb-a7057e8c768e: PCI VMBus probing: Using version 0x10004 Mar 12 00:43:19.347529 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 12 00:43:19.324969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 00:43:19.360949 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 12 00:43:19.361108 kernel: hv_pci a423e0bb-a430-4026-91eb-a7057e8c768e: PCI host bridge to bus a430:00 Mar 12 00:43:19.346678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 00:43:19.373439 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 12 00:43:19.378206 kernel: pci_bus a430:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 12 00:43:19.378313 kernel: pci_bus a430:00: No busn resource found for root bus, will use [bus 00-ff] Mar 12 00:43:19.396968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 12 00:43:19.397187 kernel: pci a430:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 12 00:43:19.406679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 00:43:19.436360 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 12 00:43:19.444468 kernel: pci a430:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 12 00:43:19.444494 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 12 00:43:19.444587 kernel: pci a430:00:02.0: enabling Extended Tags Mar 12 00:43:19.444602 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 12 00:43:19.434246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 12 00:43:19.468175 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 12 00:43:19.468337 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 12 00:43:19.468435 kernel: pci a430:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a430:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 12 00:43:19.476398 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:19.476447 kernel: pci_bus a430:00: busn_res: [bus 00-ff] end is updated to 00 Mar 12 00:43:19.480843 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 12 00:43:19.481008 kernel: pci a430:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 12 00:43:19.489424 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 12 00:43:19.495279 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 00:43:19.541371 kernel: mlx5_core a430:00:02.0: enabling device (0000 -> 0002) Mar 12 00:43:19.547384 kernel: mlx5_core a430:00:02.0: firmware version: 16.30.5026 Mar 12 00:43:19.742691 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: VF registering: eth1 Mar 12 00:43:19.742884 kernel: mlx5_core a430:00:02.0 eth1: joined to eth0 Mar 12 00:43:19.748029 kernel: mlx5_core a430:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 12 00:43:19.758402 kernel: mlx5_core a430:00:02.0 enP42032s1: renamed from eth1 Mar 12 00:43:20.061397 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 12 00:43:20.085397 kernel: BTRFS: device fsid b3e5ce46-39d1-4da5-be73-65a819b9939b devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (504) Mar 12 00:43:20.097170 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Mar 12 00:43:20.112406 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 12 00:43:20.117690 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 12 00:43:20.143559 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (502) Mar 12 00:43:20.145600 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 00:43:20.169922 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 12 00:43:20.179789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:20.189401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:20.198397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:21.199483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 12 00:43:21.199532 disk-uuid[605]: The operation has completed successfully. Mar 12 00:43:21.261768 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 00:43:21.261858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 00:43:21.299542 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 00:43:21.309558 sh[718]: Success Mar 12 00:43:21.349405 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 12 00:43:21.627888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 00:43:21.638530 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 00:43:21.653847 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 12 00:43:21.686260 kernel: BTRFS info (device dm-0): first mount of filesystem b3e5ce46-39d1-4da5-be73-65a819b9939b Mar 12 00:43:21.686289 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:21.686299 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 12 00:43:21.686308 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 12 00:43:21.686316 kernel: BTRFS info (device dm-0): using free space tree Mar 12 00:43:21.965100 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 00:43:21.969226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 00:43:21.986670 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 00:43:21.992553 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 00:43:22.028392 kernel: BTRFS info (device sda6): first mount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:22.028448 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:22.032212 kernel: BTRFS info (device sda6): using free space tree Mar 12 00:43:22.068964 kernel: BTRFS info (device sda6): auto enabling async discard Mar 12 00:43:22.081841 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 12 00:43:22.086449 kernel: BTRFS info (device sda6): last unmount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:22.093806 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 12 00:43:22.111896 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 00:43:22.122492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 00:43:22.140537 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 12 00:43:22.164832 systemd-networkd[906]: lo: Link UP Mar 12 00:43:22.164841 systemd-networkd[906]: lo: Gained carrier Mar 12 00:43:22.166469 systemd-networkd[906]: Enumeration completed Mar 12 00:43:22.167722 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 00:43:22.168154 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 00:43:22.168158 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 00:43:22.172736 systemd[1]: Reached target network.target - Network. Mar 12 00:43:22.251401 kernel: mlx5_core a430:00:02.0 enP42032s1: Link up Mar 12 00:43:22.289397 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: Data path switched to VF: enP42032s1 Mar 12 00:43:22.289677 systemd-networkd[906]: enP42032s1: Link UP Mar 12 00:43:22.289768 systemd-networkd[906]: eth0: Link UP Mar 12 00:43:22.289864 systemd-networkd[906]: eth0: Gained carrier Mar 12 00:43:22.289873 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 00:43:22.308924 systemd-networkd[906]: enP42032s1: Gained carrier Mar 12 00:43:22.319409 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 00:43:23.136945 ignition[891]: Ignition 2.19.0 Mar 12 00:43:23.136955 ignition[891]: Stage: fetch-offline Mar 12 00:43:23.140039 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 12 00:43:23.136992 ignition[891]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.137000 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.137095 ignition[891]: parsed url from cmdline: "" Mar 12 00:43:23.137098 ignition[891]: no config URL provided Mar 12 00:43:23.137103 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 00:43:23.137109 ignition[891]: no config at "/usr/lib/ignition/user.ign" Mar 12 00:43:23.165617 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 12 00:43:23.137114 ignition[891]: failed to fetch config: resource requires networking Mar 12 00:43:23.137281 ignition[891]: Ignition finished successfully Mar 12 00:43:23.183317 ignition[915]: Ignition 2.19.0 Mar 12 00:43:23.183324 ignition[915]: Stage: fetch Mar 12 00:43:23.183512 ignition[915]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.183520 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.183610 ignition[915]: parsed url from cmdline: "" Mar 12 00:43:23.183613 ignition[915]: no config URL provided Mar 12 00:43:23.183618 ignition[915]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 00:43:23.183625 ignition[915]: no config at "/usr/lib/ignition/user.ign" Mar 12 00:43:23.183646 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 12 00:43:23.279215 ignition[915]: GET result: OK Mar 12 00:43:23.279299 ignition[915]: config has been read from IMDS userdata Mar 12 00:43:23.279342 ignition[915]: parsing config with SHA512: 7a5bdb39ef8c127730b47d365706322b57369c559770441002ad1c9ce21218709fca455995234aabcc894eff697da6a32804bfbcf6c2885883750c72c914414a Mar 12 00:43:23.283045 unknown[915]: fetched base config from "system" Mar 12 00:43:23.283742 ignition[915]: fetch: fetch complete Mar 12 00:43:23.283052 unknown[915]: fetched base config from "system"
Mar 12 00:43:23.283747 ignition[915]: fetch: fetch passed Mar 12 00:43:23.283057 unknown[915]: fetched user config from "azure" Mar 12 00:43:23.283804 ignition[915]: Ignition finished successfully Mar 12 00:43:23.285522 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 12 00:43:23.299578 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 12 00:43:23.323818 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 12 00:43:23.319796 ignition[921]: Ignition 2.19.0 Mar 12 00:43:23.319802 ignition[921]: Stage: kargs Mar 12 00:43:23.320029 ignition[921]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.320038 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.340520 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 00:43:23.321318 ignition[921]: kargs: kargs passed Mar 12 00:43:23.321370 ignition[921]: Ignition finished successfully Mar 12 00:43:23.363312 ignition[927]: Ignition 2.19.0 Mar 12 00:43:23.363321 ignition[927]: Stage: disks Mar 12 00:43:23.367878 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 00:43:23.363507 ignition[927]: no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:23.373893 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 00:43:23.363517 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:23.383168 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 00:43:23.364362 ignition[927]: disks: disks passed Mar 12 00:43:23.392546 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 00:43:23.364418 ignition[927]: Ignition finished successfully Mar 12 00:43:23.402292 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 00:43:23.412247 systemd[1]: Reached target basic.target - Basic System.
Mar 12 00:43:23.431604 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 00:43:23.503051 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 12 00:43:23.511661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 00:43:23.524541 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 00:43:23.576639 kernel: EXT4-fs (sda9): mounted filesystem c38155a0-efb1-4bf7-a848-c3aca111ae6d r/w with ordered data mode. Quota mode: none. Mar 12 00:43:23.578036 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 00:43:23.584984 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 00:43:23.623444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 00:43:23.641389 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946) Mar 12 00:43:23.652066 kernel: BTRFS info (device sda6): first mount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:23.652084 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:23.656252 kernel: BTRFS info (device sda6): using free space tree Mar 12 00:43:23.663390 kernel: BTRFS info (device sda6): auto enabling async discard Mar 12 00:43:23.663477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 00:43:23.671529 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 12 00:43:23.678675 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 00:43:23.678711 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 00:43:23.689664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 00:43:23.703084 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 12 00:43:23.720607 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 12 00:43:24.061471 systemd-networkd[906]: eth0: Gained IPv6LL Mar 12 00:43:24.434507 coreos-metadata[963]: Mar 12 00:43:24.434 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 12 00:43:24.442315 coreos-metadata[963]: Mar 12 00:43:24.442 INFO Fetch successful Mar 12 00:43:24.442315 coreos-metadata[963]: Mar 12 00:43:24.442 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 12 00:43:24.456754 coreos-metadata[963]: Mar 12 00:43:24.456 INFO Fetch successful Mar 12 00:43:24.472427 coreos-metadata[963]: Mar 12 00:43:24.471 INFO wrote hostname ci-4081.3.6-n-d10d02cd33 to /sysroot/etc/hostname Mar 12 00:43:24.480566 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 12 00:43:24.679812 initrd-setup-root[976]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 00:43:24.716225 initrd-setup-root[983]: cut: /sysroot/etc/group: No such file or directory Mar 12 00:43:24.724054 initrd-setup-root[990]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 00:43:24.731317 initrd-setup-root[997]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 00:43:25.742000 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 00:43:25.759808 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 00:43:25.775386 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 00:43:25.782789 kernel: BTRFS info (device sda6): last unmount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:25.785446 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 12 00:43:25.805344 ignition[1065]: INFO : Ignition 2.19.0 Mar 12 00:43:25.810064 ignition[1065]: INFO : Stage: mount Mar 12 00:43:25.812927 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:25.812927 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:25.812927 ignition[1065]: INFO : mount: mount passed Mar 12 00:43:25.812927 ignition[1065]: INFO : Ignition finished successfully Mar 12 00:43:25.811139 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 00:43:25.817805 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 00:43:25.838495 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 00:43:25.855884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 00:43:25.873405 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076) Mar 12 00:43:25.884029 kernel: BTRFS info (device sda6): first mount of filesystem eac5c0c3-39ea-4f12-a58e-49b363adea2b Mar 12 00:43:25.884067 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 12 00:43:25.887259 kernel: BTRFS info (device sda6): using free space tree Mar 12 00:43:25.896399 kernel: BTRFS info (device sda6): auto enabling async discard Mar 12 00:43:25.895723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 12 00:43:25.921779 ignition[1093]: INFO : Ignition 2.19.0 Mar 12 00:43:25.921779 ignition[1093]: INFO : Stage: files Mar 12 00:43:25.927841 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 00:43:25.927841 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 12 00:43:25.927841 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Mar 12 00:43:25.958484 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 00:43:25.958484 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 00:43:26.045173 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 00:43:26.050876 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 00:43:26.050876 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 00:43:26.045586 unknown[1093]: wrote ssh authorized keys file for user: core Mar 12 00:43:26.077579 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 00:43:26.086311 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 12 00:43:26.179963 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 00:43:26.426500 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 12 00:43:26.426500 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 12 00:43:26.442611 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 12 00:43:27.060231 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 00:43:27.548369 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 12 00:43:27.548369 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 00:43:27.578252 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 00:43:27.587731 ignition[1093]: INFO : files: files passed Mar 12 00:43:27.587731 ignition[1093]: INFO : Ignition finished successfully Mar 12 00:43:27.588289 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 00:43:27.617621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 00:43:27.631538 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 00:43:27.647157 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 00:43:27.648399 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 00:43:27.681279 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 00:43:27.689431 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 00:43:27.689431 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 00:43:27.683347 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 00:43:27.694040 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 00:43:27.717655 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 00:43:27.750136 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 00:43:27.750260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 00:43:27.760506 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 00:43:27.770915 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 00:43:27.779304 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 00:43:27.790857 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 00:43:27.809665 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 00:43:27.821646 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 00:43:27.839230 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 00:43:27.844336 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 12 00:43:27.853884 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 00:43:27.862209 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 00:43:27.862337 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 00:43:27.874843 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 00:43:27.884304 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 00:43:27.892198 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 00:43:27.900528 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 00:43:27.909712 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 00:43:27.919188 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 00:43:27.928012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 00:43:27.937564 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 00:43:27.947310 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 00:43:27.955495 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 00:43:27.963001 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 00:43:27.963167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 00:43:27.975178 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 00:43:27.984132 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 00:43:27.993895 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 00:43:27.998285 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 00:43:28.003565 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 00:43:28.003724 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 00:43:28.017430 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 00:43:28.017592 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 00:43:28.026932 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 00:43:28.027086 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 00:43:28.035347 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 12 00:43:28.035498 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 12 00:43:28.062471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 00:43:28.078031 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 00:43:28.092669 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 00:43:28.104763 ignition[1146]: INFO : Ignition 2.19.0
Mar 12 00:43:28.104763 ignition[1146]: INFO : Stage: umount
Mar 12 00:43:28.104763 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 00:43:28.104763 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 12 00:43:28.104763 ignition[1146]: INFO : umount: umount passed
Mar 12 00:43:28.104763 ignition[1146]: INFO : Ignition finished successfully
Mar 12 00:43:28.092828 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 00:43:28.099918 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 00:43:28.100028 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 00:43:28.118640 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 00:43:28.118750 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 00:43:28.132628 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 00:43:28.132904 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 00:43:28.140920 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 00:43:28.140973 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 00:43:28.149317 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 12 00:43:28.149361 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 12 00:43:28.158601 systemd[1]: Stopped target network.target - Network.
Mar 12 00:43:28.166722 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 12 00:43:28.166772 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 00:43:28.175832 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 00:43:28.183527 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 00:43:28.189403 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 00:43:28.196599 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 00:43:28.204701 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 00:43:28.213181 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 00:43:28.213231 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 00:43:28.220866 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 00:43:28.220903 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 00:43:28.232660 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 00:43:28.232715 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 00:43:28.240567 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 00:43:28.240607 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 00:43:28.249242 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 00:43:28.258203 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 00:43:28.266603 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 00:43:28.267153 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 00:43:28.267244 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 00:43:28.269687 systemd-networkd[906]: eth0: DHCPv6 lease lost
Mar 12 00:43:28.276452 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 00:43:28.276557 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 00:43:28.288143 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 00:43:28.288237 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 00:43:28.299435 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 00:43:28.299492 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 00:43:28.315512 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 00:43:28.464789 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: Data path switched from VF: enP42032s1
Mar 12 00:43:28.323129 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 00:43:28.323213 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 00:43:28.333354 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 00:43:28.333417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 00:43:28.341768 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 00:43:28.341814 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 00:43:28.349912 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 00:43:28.349951 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 00:43:28.359145 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 00:43:28.384922 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 00:43:28.385065 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 00:43:28.405211 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 00:43:28.405277 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 00:43:28.413833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 00:43:28.413876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 00:43:28.422574 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 00:43:28.422626 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 00:43:28.435648 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 00:43:28.435703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 00:43:28.459277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 00:43:28.459364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 00:43:28.480635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 00:43:28.494201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 00:43:28.494283 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 00:43:28.499800 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 12 00:43:28.499850 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 00:43:28.509354 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 00:43:28.509401 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 00:43:28.525439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 00:43:28.525510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 00:43:28.536213 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 00:43:28.536312 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 00:43:28.544998 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 00:43:28.545075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 00:43:28.556801 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 00:43:28.556882 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 00:43:28.566176 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 00:43:28.574652 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 00:43:28.574754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 00:43:28.596563 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 00:43:28.678229 systemd[1]: Switching root.
Mar 12 00:43:28.773978 systemd-journald[217]: Journal stopped
Mar 12 00:43:33.643417 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 12 00:43:33.643450 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 00:43:33.643461 kernel: SELinux: policy capability open_perms=1
Mar 12 00:43:33.643472 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 00:43:33.643480 kernel: SELinux: policy capability always_check_network=0
Mar 12 00:43:33.643488 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 00:43:33.643496 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 00:43:33.643505 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 00:43:33.643513 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 00:43:33.643521 systemd[1]: Successfully loaded SELinux policy in 202.856ms.
Mar 12 00:43:33.643532 kernel: audit: type=1403 audit(1773276210.098:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 00:43:33.643541 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.270ms.
Mar 12 00:43:33.643550 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 00:43:33.643560 systemd[1]: Detected virtualization microsoft.
Mar 12 00:43:33.643569 systemd[1]: Detected architecture arm64.
Mar 12 00:43:33.643579 systemd[1]: Detected first boot.
Mar 12 00:43:33.643589 systemd[1]: Hostname set to .
Mar 12 00:43:33.643598 systemd[1]: Initializing machine ID from random generator.
Mar 12 00:43:33.643607 zram_generator::config[1188]: No configuration found.
Mar 12 00:43:33.643616 systemd[1]: Populated /etc with preset unit settings.
Mar 12 00:43:33.643626 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 00:43:33.643636 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 00:43:33.643647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 00:43:33.643656 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 00:43:33.643665 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 00:43:33.643675 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 00:43:33.643684 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 00:43:33.643693 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 00:43:33.643704 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 00:43:33.643714 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 00:43:33.643723 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 00:43:33.643732 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 00:43:33.643741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 00:43:33.643750 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 00:43:33.643759 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 00:43:33.643769 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 00:43:33.643778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 00:43:33.643789 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 12 00:43:33.643798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 00:43:33.643807 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 00:43:33.643818 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 00:43:33.643828 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 00:43:33.643838 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 00:43:33.643848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 00:43:33.643859 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 00:43:33.643868 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 00:43:33.643877 systemd[1]: Reached target swap.target - Swaps.
Mar 12 00:43:33.643887 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 00:43:33.643896 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 00:43:33.643906 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 00:43:33.643915 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 00:43:33.643926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 00:43:33.643936 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 00:43:33.643945 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 00:43:33.643955 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 00:43:33.643964 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 00:43:33.643973 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 00:43:33.643984 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 00:43:33.643994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 00:43:33.644003 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 00:43:33.644013 systemd[1]: Reached target machines.target - Containers.
Mar 12 00:43:33.644023 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 00:43:33.644033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 00:43:33.644043 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 00:43:33.644053 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 00:43:33.644064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 00:43:33.644074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 00:43:33.644083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 00:43:33.644093 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 00:43:33.644102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 00:43:33.644112 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 00:43:33.644122 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 00:43:33.644132 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 00:43:33.644141 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 00:43:33.644152 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 00:43:33.644162 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 00:43:33.644171 kernel: ACPI: bus type drm_connector registered
Mar 12 00:43:33.644179 kernel: fuse: init (API version 7.39)
Mar 12 00:43:33.644188 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 00:43:33.644198 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 00:43:33.644207 kernel: loop: module loaded
Mar 12 00:43:33.644216 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 00:43:33.644248 systemd-journald[1281]: Collecting audit messages is disabled.
Mar 12 00:43:33.644270 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 00:43:33.644281 systemd-journald[1281]: Journal started
Mar 12 00:43:33.644303 systemd-journald[1281]: Runtime Journal (/run/log/journal/0d1dd72255a04204ad2948e0d208124d) is 8.0M, max 78.5M, 70.5M free.
Mar 12 00:43:32.787214 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 00:43:32.904022 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 12 00:43:32.904368 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 00:43:32.904686 systemd[1]: systemd-journald.service: Consumed 2.520s CPU time.
Mar 12 00:43:33.661096 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 12 00:43:33.661150 systemd[1]: Stopped verity-setup.service.
Mar 12 00:43:33.674817 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 00:43:33.675535 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 00:43:33.679908 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 00:43:33.684643 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 00:43:33.688741 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 00:43:33.693738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 00:43:33.699030 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 00:43:33.703679 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 00:43:33.711412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 00:43:33.717325 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 00:43:33.717474 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 00:43:33.723353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 00:43:33.723513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 00:43:33.728641 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 00:43:33.728770 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 00:43:33.733398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 00:43:33.733524 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 00:43:33.738991 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 00:43:33.739111 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 00:43:33.743900 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 00:43:33.744024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 00:43:33.751013 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 00:43:33.756222 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 00:43:33.761817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 00:43:33.767301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 00:43:33.783938 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 00:43:33.799483 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 00:43:33.805355 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 00:43:33.810144 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 00:43:33.810178 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 00:43:33.815445 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 12 00:43:33.821742 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 00:43:33.827576 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 00:43:33.831983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 00:43:33.856776 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 00:43:33.862185 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 00:43:33.867080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 00:43:33.870558 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 00:43:33.877278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 00:43:33.880545 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 00:43:33.890037 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 00:43:33.897618 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 00:43:33.909062 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 12 00:43:33.918046 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 00:43:33.926793 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 00:43:33.929278 systemd-journald[1281]: Time spent on flushing to /var/log/journal/0d1dd72255a04204ad2948e0d208124d is 60.924ms for 898 entries.
Mar 12 00:43:33.929278 systemd-journald[1281]: System Journal (/var/log/journal/0d1dd72255a04204ad2948e0d208124d) is 11.8M, max 2.6G, 2.6G free.
Mar 12 00:43:34.081299 systemd-journald[1281]: Received client request to flush runtime journal.
Mar 12 00:43:34.081363 systemd-journald[1281]: /var/log/journal/0d1dd72255a04204ad2948e0d208124d/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Mar 12 00:43:34.081415 systemd-journald[1281]: Rotating system journal.
Mar 12 00:43:34.081439 kernel: loop0: detected capacity change from 0 to 31320
Mar 12 00:43:33.937358 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 00:43:33.943969 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 00:43:33.955322 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 00:43:33.978890 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 12 00:43:33.990574 udevadm[1325]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 12 00:43:34.083183 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 00:43:34.099753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 00:43:34.133567 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 00:43:34.135266 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 12 00:43:34.150805 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Mar 12 00:43:34.150820 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Mar 12 00:43:34.156684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 00:43:34.168229 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 00:43:34.237192 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 00:43:34.246561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 00:43:34.263841 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Mar 12 00:43:34.264420 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Mar 12 00:43:34.268529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 00:43:34.654409 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 00:43:34.691513 kernel: loop1: detected capacity change from 0 to 209336
Mar 12 00:43:34.769319 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 00:43:34.783427 kernel: loop2: detected capacity change from 0 to 114432
Mar 12 00:43:34.785531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 00:43:34.810602 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Mar 12 00:43:35.026287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 00:43:35.041755 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 00:43:35.090855 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 12 00:43:35.100705 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 00:43:35.149407 kernel: loop3: detected capacity change from 0 to 114328
Mar 12 00:43:35.168137 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 00:43:35.199397 kernel: mousedev: PS/2 mouse device common for all mice
Mar 12 00:43:35.225901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#126 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 12 00:43:35.237967 kernel: hv_vmbus: registering driver hv_balloon
Mar 12 00:43:35.238047 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 12 00:43:35.241290 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 12 00:43:35.258689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 00:43:35.280050 systemd-networkd[1360]: lo: Link UP
Mar 12 00:43:35.280352 systemd-networkd[1360]: lo: Gained carrier
Mar 12 00:43:35.282279 systemd-networkd[1360]: Enumeration completed
Mar 12 00:43:35.282550 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 00:43:35.287994 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 00:43:35.288073 systemd-networkd[1360]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 00:43:35.298390 kernel: hv_vmbus: registering driver hyperv_fb
Mar 12 00:43:35.307342 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 12 00:43:35.307429 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 12 00:43:35.309533 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 00:43:35.319405 kernel: Console: switching to colour dummy device 80x25
Mar 12 00:43:35.323404 kernel: Console: switching to colour frame buffer device 128x48
Mar 12 00:43:35.335996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 00:43:35.336186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 00:43:35.353396 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1354)
Mar 12 00:43:35.357901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 00:43:35.358393 kernel: mlx5_core a430:00:02.0 enP42032s1: Link up
Mar 12 00:43:35.389477 kernel: hv_netvsc 000d3ac5-70e2-000d-3ac5-70e2000d3ac5 eth0: Data path switched to VF: enP42032s1
Mar 12 00:43:35.389931 systemd-networkd[1360]: enP42032s1: Link UP
Mar 12 00:43:35.390167 systemd-networkd[1360]: eth0: Link UP
Mar 12 00:43:35.391761 systemd-networkd[1360]: eth0: Gained carrier
Mar 12 00:43:35.391894 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 00:43:35.396932 systemd-networkd[1360]: enP42032s1: Gained carrier
Mar 12 00:43:35.410790 systemd-networkd[1360]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 12 00:43:35.415832 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 12 00:43:35.428549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 00:43:35.504668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 00:43:35.584443 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 12 00:43:35.595533 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 12 00:43:35.625584 kernel: loop4: detected capacity change from 0 to 31320 Mar 12 00:43:35.638397 kernel: loop5: detected capacity change from 0 to 209336 Mar 12 00:43:35.660395 kernel: loop6: detected capacity change from 0 to 114432 Mar 12 00:43:35.663634 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 00:43:35.675913 kernel: loop7: detected capacity change from 0 to 114328 Mar 12 00:43:35.675998 kernel: I/O error, dev loop7, sector 0 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 2 Mar 12 00:43:35.686145 (sd-merge)[1448]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 12 00:43:35.686585 (sd-merge)[1448]: Merged extensions into '/usr'. Mar 12 00:43:35.692581 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 00:43:35.698332 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 00:43:35.698345 systemd[1]: Reloading... Mar 12 00:43:35.763525 zram_generator::config[1475]: No configuration found. Mar 12 00:43:35.899843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 00:43:35.971403 systemd[1]: Reloading finished in 272 ms. Mar 12 00:43:35.999210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 00:43:36.005557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 00:43:36.013269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 00:43:36.023580 systemd[1]: Starting ensure-sysext.service... Mar 12 00:43:36.029595 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Mar 12 00:43:36.037561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 00:43:36.038978 lvm[1537]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 00:43:36.048897 systemd[1]: Reloading requested from client PID 1536 ('systemctl') (unit ensure-sysext.service)... Mar 12 00:43:36.048909 systemd[1]: Reloading... Mar 12 00:43:36.066492 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 00:43:36.068315 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 00:43:36.069319 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 00:43:36.069571 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Mar 12 00:43:36.069625 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Mar 12 00:43:36.072630 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 00:43:36.072681 systemd-tmpfiles[1538]: Skipping /boot Mar 12 00:43:36.085245 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 00:43:36.085257 systemd-tmpfiles[1538]: Skipping /boot Mar 12 00:43:36.131402 zram_generator::config[1581]: No configuration found. Mar 12 00:43:36.221934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 00:43:36.295744 systemd[1]: Reloading finished in 246 ms. Mar 12 00:43:36.309526 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 12 00:43:36.324248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 12 00:43:36.341574 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 00:43:36.370955 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 00:43:36.378614 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 00:43:36.390466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 00:43:36.407675 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 00:43:36.417067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 00:43:36.421663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 00:43:36.434274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 00:43:36.446615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 00:43:36.453420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 00:43:36.454638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 00:43:36.454786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 00:43:36.463895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 00:43:36.464838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 00:43:36.471977 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 00:43:36.472284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 00:43:36.480991 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 00:43:36.488564 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 12 00:43:36.502506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 00:43:36.507157 systemd-resolved[1637]: Positive Trust Anchors: Mar 12 00:43:36.507174 systemd-resolved[1637]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 00:43:36.507206 systemd-resolved[1637]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 00:43:36.508592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 00:43:36.514892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 00:43:36.522898 augenrules[1656]: No rules Mar 12 00:43:36.531536 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 00:43:36.538409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 00:43:36.546284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 00:43:36.546560 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 00:43:36.552157 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 00:43:36.557388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 00:43:36.557602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 00:43:36.562955 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 12 00:43:36.563173 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 00:43:36.565557 systemd-resolved[1637]: Using system hostname 'ci-4081.3.6-n-d10d02cd33'. Mar 12 00:43:36.568578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 00:43:36.568791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 00:43:36.574422 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 00:43:36.574629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 00:43:36.582540 systemd[1]: Finished ensure-sysext.service. Mar 12 00:43:36.588779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 00:43:36.588945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 00:43:36.608115 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 00:43:36.613148 systemd[1]: Reached target network.target - Network. Mar 12 00:43:36.617243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 00:43:37.138666 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 12 00:43:37.144489 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 12 00:43:37.373520 systemd-networkd[1360]: eth0: Gained IPv6LL Mar 12 00:43:37.376296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 00:43:37.382578 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 00:43:40.434727 ldconfig[1317]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Mar 12 00:43:40.450063 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 00:43:40.459559 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 00:43:40.472741 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 00:43:40.478111 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 00:43:40.482845 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 00:43:40.487933 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 00:43:40.493677 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 00:43:40.498354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 00:43:40.503870 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 00:43:40.509955 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 00:43:40.509990 systemd[1]: Reached target paths.target - Path Units. Mar 12 00:43:40.514281 systemd[1]: Reached target timers.target - Timer Units. Mar 12 00:43:40.522422 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 00:43:40.528674 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 00:43:40.539235 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 00:43:40.544157 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 00:43:40.548634 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 00:43:40.552572 systemd[1]: Reached target basic.target - Basic System. Mar 12 00:43:40.556655 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 12 00:43:40.556681 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 00:43:40.567487 systemd[1]: Starting chronyd.service - NTP client/server... Mar 12 00:43:40.574506 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 00:43:40.585539 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 12 00:43:40.591622 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 12 00:43:40.594672 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 00:43:40.599949 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 00:43:40.607558 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 00:43:40.610967 jq[1684]: false Mar 12 00:43:40.613327 chronyd[1687]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 12 00:43:40.614418 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 00:43:40.614461 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 12 00:43:40.624915 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 12 00:43:40.626912 KVP[1688]: KVP starting; pid is:1688 Mar 12 00:43:40.629409 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 12 00:43:40.630500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 00:43:40.638449 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 12 00:43:40.646565 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 00:43:40.652045 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 00:43:40.662565 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 00:43:40.669132 chronyd[1687]: Timezone right/UTC failed leap second check, ignoring Mar 12 00:43:40.669325 chronyd[1687]: Loaded seccomp filter (level 2) Mar 12 00:43:40.672772 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 00:43:40.682615 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 00:43:40.689017 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 00:43:40.689905 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 00:43:40.691046 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 12 00:43:40.703951 extend-filesystems[1685]: Found loop4 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found loop5 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found loop6 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found loop7 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda1 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda2 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda3 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found usr Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda4 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda6 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda7 Mar 12 00:43:40.703951 extend-filesystems[1685]: Found sda9 Mar 12 00:43:40.891760 kernel: hv_utils: KVP IC version 4.0 Mar 12 00:43:40.699566 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 00:43:40.891841 update_engine[1706]: I20260312 00:43:40.775074 1706 main.cc:92] Flatcar Update Engine starting Mar 12 00:43:40.892059 extend-filesystems[1685]: Checking size of /dev/sda9 Mar 12 00:43:40.892059 extend-filesystems[1685]: Old size kept for /dev/sda9 Mar 12 00:43:40.892059 extend-filesystems[1685]: Found sr0 Mar 12 00:43:40.704904 KVP[1688]: KVP LIC Version: 3.1 Mar 12 00:43:40.708706 systemd[1]: Started chronyd.service - NTP client/server. Mar 12 00:43:40.967980 update_engine[1706]: I20260312 00:43:40.906502 1706 update_check_scheduler.cc:74] Next update check in 4m24s Mar 12 00:43:40.884903 dbus-daemon[1681]: [system] SELinux support is enabled Mar 12 00:43:41.041206 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1729) Mar 12 00:43:40.732745 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 12 00:43:41.041298 jq[1708]: true Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.001 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.001 INFO Fetch successful Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.001 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.001 INFO Fetch successful Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.001 INFO Fetching http://168.63.129.16/machine/9c3f3dd2-1c1d-4a28-9edd-3a664fb1cfe4/cc7d616f%2D318e%2D45dc%2D8db5%2D8682f72b682b.%5Fci%2D4081.3.6%2Dn%2Dd10d02cd33?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.001 INFO Fetch successful Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.001 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 12 00:43:41.042400 coreos-metadata[1680]: Mar 12 00:43:41.013 INFO Fetch successful Mar 12 00:43:40.937621 dbus-daemon[1681]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 12 00:43:40.734427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 00:43:40.738221 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 00:43:40.739678 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 00:43:41.069802 tar[1714]: linux-arm64/LICENSE Mar 12 00:43:41.069802 tar[1714]: linux-arm64/helm Mar 12 00:43:40.762771 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 00:43:41.070064 jq[1718]: true Mar 12 00:43:40.762957 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 00:43:40.802624 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Mar 12 00:43:40.802793 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 00:43:41.072157 bash[1763]: Updated "/home/core/.ssh/authorized_keys" Mar 12 00:43:40.803088 (ntainerd)[1719]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 00:43:40.815553 systemd-logind[1701]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 00:43:40.816521 systemd-logind[1701]: New seat seat0. Mar 12 00:43:40.828143 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 00:43:40.846892 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 00:43:40.885692 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 00:43:40.902023 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 00:43:40.902057 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 00:43:40.920098 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 00:43:40.920118 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 00:43:40.937224 systemd[1]: Started update-engine.service - Update Engine. Mar 12 00:43:41.008891 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 00:43:41.118601 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 00:43:41.167442 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Mar 12 00:43:41.187065 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 00:43:41.189084 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 12 00:43:41.352530 sshd_keygen[1707]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 00:43:41.354882 locksmithd[1762]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 00:43:41.374728 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 00:43:41.392459 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 00:43:41.403127 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 12 00:43:41.414204 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 00:43:41.415668 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 00:43:41.429698 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 00:43:41.466858 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 00:43:41.480817 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 00:43:41.487605 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 12 00:43:41.493057 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 00:43:41.503873 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 12 00:43:41.625600 containerd[1719]: time="2026-03-12T00:43:41.625256820Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 00:43:41.658918 containerd[1719]: time="2026-03-12T00:43:41.658871620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 12 00:43:41.661183 containerd[1719]: time="2026-03-12T00:43:41.661149220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662118140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662150580Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662314460Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662333020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662413620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662426460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662586100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662601820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662614340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662624820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662695220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663195 containerd[1719]: time="2026-03-12T00:43:41.662880420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663470 containerd[1719]: time="2026-03-12T00:43:41.662972500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 00:43:41.663470 containerd[1719]: time="2026-03-12T00:43:41.662985300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 00:43:41.663470 containerd[1719]: time="2026-03-12T00:43:41.663055100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 00:43:41.663470 containerd[1719]: time="2026-03-12T00:43:41.663094580Z" level=info msg="metadata content store policy set" policy=shared Mar 12 00:43:41.691557 containerd[1719]: time="2026-03-12T00:43:41.691512460Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 00:43:41.692605 containerd[1719]: time="2026-03-12T00:43:41.692580660Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 12 00:43:41.692725 containerd[1719]: time="2026-03-12T00:43:41.692713340Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 00:43:41.693043 containerd[1719]: time="2026-03-12T00:43:41.692793100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 00:43:41.693043 containerd[1719]: time="2026-03-12T00:43:41.692828980Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 12 00:43:41.693043 containerd[1719]: time="2026-03-12T00:43:41.692993500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 00:43:41.693351 containerd[1719]: time="2026-03-12T00:43:41.693333940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 00:43:41.694150 containerd[1719]: time="2026-03-12T00:43:41.694129500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 00:43:41.694234 containerd[1719]: time="2026-03-12T00:43:41.694221260Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 00:43:41.694310 containerd[1719]: time="2026-03-12T00:43:41.694297940Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 12 00:43:41.694388 containerd[1719]: time="2026-03-12T00:43:41.694366700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695352180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695384860Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695401980Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695419100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695433220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695446980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695460020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695482660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695497380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695512420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695525900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695538100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695551100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.695776 containerd[1719]: time="2026-03-12T00:43:41.695572900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695588460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695600860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695616060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695630620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695643060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695655220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695672100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695694980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695707100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.696078 containerd[1719]: time="2026-03-12T00:43:41.695719700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 00:43:41.696325 containerd[1719]: time="2026-03-12T00:43:41.696274420Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 00:43:41.696325 containerd[1719]: time="2026-03-12T00:43:41.696304220Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 00:43:41.700251 containerd[1719]: time="2026-03-12T00:43:41.698093780Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 00:43:41.700251 containerd[1719]: time="2026-03-12T00:43:41.698128540Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 00:43:41.700251 containerd[1719]: time="2026-03-12T00:43:41.698139820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 00:43:41.700251 containerd[1719]: time="2026-03-12T00:43:41.698154380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 00:43:41.700251 containerd[1719]: time="2026-03-12T00:43:41.698167820Z" level=info msg="NRI interface is disabled by configuration." Mar 12 00:43:41.700251 containerd[1719]: time="2026-03-12T00:43:41.698179340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 12 00:43:41.700426 containerd[1719]: time="2026-03-12T00:43:41.698487940Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 00:43:41.700426 containerd[1719]: time="2026-03-12T00:43:41.698548060Z" level=info msg="Connect containerd service" Mar 12 00:43:41.700426 containerd[1719]: time="2026-03-12T00:43:41.698578940Z" level=info msg="using legacy CRI server" Mar 12 00:43:41.700426 containerd[1719]: time="2026-03-12T00:43:41.698585540Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 00:43:41.700426 containerd[1719]: time="2026-03-12T00:43:41.698672700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 00:43:41.702504 containerd[1719]: time="2026-03-12T00:43:41.702480020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 00:43:41.704135 containerd[1719]: time="2026-03-12T00:43:41.703989980Z" level=info msg="Start subscribing containerd event" Mar 12 00:43:41.704135 containerd[1719]: time="2026-03-12T00:43:41.704054780Z" level=info msg="Start recovering state" Mar 12 00:43:41.708391 containerd[1719]: time="2026-03-12T00:43:41.705028620Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 12 00:43:41.708391 containerd[1719]: time="2026-03-12T00:43:41.707522060Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 00:43:41.708391 containerd[1719]: time="2026-03-12T00:43:41.707419580Z" level=info msg="Start event monitor" Mar 12 00:43:41.708391 containerd[1719]: time="2026-03-12T00:43:41.707552580Z" level=info msg="Start snapshots syncer" Mar 12 00:43:41.708391 containerd[1719]: time="2026-03-12T00:43:41.707565500Z" level=info msg="Start cni network conf syncer for default" Mar 12 00:43:41.708391 containerd[1719]: time="2026-03-12T00:43:41.707572460Z" level=info msg="Start streaming server" Mar 12 00:43:41.709124 containerd[1719]: time="2026-03-12T00:43:41.709107900Z" level=info msg="containerd successfully booted in 0.084607s" Mar 12 00:43:41.709214 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 00:43:41.766426 tar[1714]: linux-arm64/README.md Mar 12 00:43:41.778199 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 00:43:41.846593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 00:43:41.852594 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 00:43:41.852617 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 00:43:41.864464 systemd[1]: Startup finished in 614ms (kernel) + 12.291s (initrd) + 11.968s (userspace) = 24.874s. Mar 12 00:43:42.106594 login[1823]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 12 00:43:42.109083 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 12 00:43:42.118472 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 00:43:42.124037 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 12 00:43:42.127363 systemd-logind[1701]: New session 2 of user core. Mar 12 00:43:42.154567 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 00:43:42.163690 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 00:43:42.166661 (systemd)[1851]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 00:43:42.289897 systemd[1851]: Queued start job for default target default.target. Mar 12 00:43:42.297261 systemd[1851]: Created slice app.slice - User Application Slice. Mar 12 00:43:42.297290 systemd[1851]: Reached target paths.target - Paths. Mar 12 00:43:42.297302 systemd[1851]: Reached target timers.target - Timers. Mar 12 00:43:42.299580 systemd[1851]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 00:43:42.306530 kubelet[1840]: E0312 00:43:42.306489 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 00:43:42.309983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 00:43:42.310131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 00:43:42.314579 systemd[1851]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 00:43:42.314641 systemd[1851]: Reached target sockets.target - Sockets. Mar 12 00:43:42.314653 systemd[1851]: Reached target basic.target - Basic System. Mar 12 00:43:42.314692 systemd[1851]: Reached target default.target - Main User Target. Mar 12 00:43:42.314718 systemd[1851]: Startup finished in 138ms. Mar 12 00:43:42.315051 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 00:43:42.320591 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 12 00:43:43.106967 login[1823]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 12 00:43:43.110764 systemd-logind[1701]: New session 1 of user core. Mar 12 00:43:43.116504 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 00:43:43.270429 waagent[1826]: 2026-03-12T00:43:43.270311Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 12 00:43:43.275005 waagent[1826]: 2026-03-12T00:43:43.274942Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 12 00:43:43.278649 waagent[1826]: 2026-03-12T00:43:43.278601Z INFO Daemon Daemon Python: 3.11.9 Mar 12 00:43:43.284284 waagent[1826]: 2026-03-12T00:43:43.282198Z INFO Daemon Daemon Run daemon Mar 12 00:43:43.285535 waagent[1826]: 2026-03-12T00:43:43.285493Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 12 00:43:43.292572 waagent[1826]: 2026-03-12T00:43:43.292513Z INFO Daemon Daemon Using waagent for provisioning Mar 12 00:43:43.296832 waagent[1826]: 2026-03-12T00:43:43.296787Z INFO Daemon Daemon Activate resource disk Mar 12 00:43:43.300576 waagent[1826]: 2026-03-12T00:43:43.300534Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 12 00:43:43.309717 waagent[1826]: 2026-03-12T00:43:43.309666Z INFO Daemon Daemon Found device: None Mar 12 00:43:43.313125 waagent[1826]: 2026-03-12T00:43:43.313086Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 12 00:43:43.319837 waagent[1826]: 2026-03-12T00:43:43.319797Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 12 00:43:43.329918 waagent[1826]: 2026-03-12T00:43:43.329866Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 00:43:43.334435 waagent[1826]: 2026-03-12T00:43:43.334392Z INFO Daemon Daemon Running default provisioning handler Mar 12 
00:43:43.345476 waagent[1826]: 2026-03-12T00:43:43.345419Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 12 00:43:43.356179 waagent[1826]: 2026-03-12T00:43:43.356120Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 12 00:43:43.363516 waagent[1826]: 2026-03-12T00:43:43.363437Z INFO Daemon Daemon cloud-init is enabled: False Mar 12 00:43:43.367446 waagent[1826]: 2026-03-12T00:43:43.367408Z INFO Daemon Daemon Copying ovf-env.xml Mar 12 00:43:43.477804 waagent[1826]: 2026-03-12T00:43:43.476906Z INFO Daemon Daemon Successfully mounted dvd Mar 12 00:43:43.517830 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 12 00:43:43.521864 waagent[1826]: 2026-03-12T00:43:43.521788Z INFO Daemon Daemon Detect protocol endpoint Mar 12 00:43:43.525915 waagent[1826]: 2026-03-12T00:43:43.525866Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 12 00:43:43.530581 waagent[1826]: 2026-03-12T00:43:43.530535Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 12 00:43:43.535826 waagent[1826]: 2026-03-12T00:43:43.535786Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 12 00:43:43.540177 waagent[1826]: 2026-03-12T00:43:43.540135Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 12 00:43:43.544890 waagent[1826]: 2026-03-12T00:43:43.544844Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 12 00:43:43.609857 waagent[1826]: 2026-03-12T00:43:43.609807Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 12 00:43:43.615136 waagent[1826]: 2026-03-12T00:43:43.615077Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 12 00:43:43.619134 waagent[1826]: 2026-03-12T00:43:43.619097Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 12 00:43:43.890457 waagent[1826]: 2026-03-12T00:43:43.890182Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 12 00:43:43.895367 waagent[1826]: 2026-03-12T00:43:43.895312Z INFO Daemon Daemon Forcing an update of the goal state. Mar 12 00:43:43.902598 waagent[1826]: 2026-03-12T00:43:43.902549Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 00:43:43.921986 waagent[1826]: 2026-03-12T00:43:43.921942Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 12 00:43:43.926698 waagent[1826]: 2026-03-12T00:43:43.926653Z INFO Daemon Mar 12 00:43:43.928840 waagent[1826]: 2026-03-12T00:43:43.928797Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 1b440a99-c242-4865-8615-b1a8e693472f eTag: 10600741933898852667 source: Fabric] Mar 12 00:43:43.937656 waagent[1826]: 2026-03-12T00:43:43.937611Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 12 00:43:43.942847 waagent[1826]: 2026-03-12T00:43:43.942802Z INFO Daemon Mar 12 00:43:43.945078 waagent[1826]: 2026-03-12T00:43:43.945034Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 12 00:43:43.953718 waagent[1826]: 2026-03-12T00:43:43.953682Z INFO Daemon Daemon Downloading artifacts profile blob Mar 12 00:43:44.023989 waagent[1826]: 2026-03-12T00:43:44.023904Z INFO Daemon Downloaded certificate {'thumbprint': '9BC1B557B5DA9B7D014C45563CB9A4D3400719BE', 'hasPrivateKey': True} Mar 12 00:43:44.031814 waagent[1826]: 2026-03-12T00:43:44.031762Z INFO Daemon Fetch goal state completed Mar 12 00:43:44.041543 waagent[1826]: 2026-03-12T00:43:44.041503Z INFO Daemon Daemon Starting provisioning Mar 12 00:43:44.045428 waagent[1826]: 2026-03-12T00:43:44.045363Z INFO Daemon Daemon Handle ovf-env.xml. Mar 12 00:43:44.049106 waagent[1826]: 2026-03-12T00:43:44.049066Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-d10d02cd33] Mar 12 00:43:44.055369 waagent[1826]: 2026-03-12T00:43:44.055314Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-d10d02cd33] Mar 12 00:43:44.060441 waagent[1826]: 2026-03-12T00:43:44.060388Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 12 00:43:44.066280 waagent[1826]: 2026-03-12T00:43:44.066231Z INFO Daemon Daemon Primary interface is [eth0] Mar 12 00:43:44.117241 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 00:43:44.117248 systemd-networkd[1360]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 12 00:43:44.117274 systemd-networkd[1360]: eth0: DHCP lease lost Mar 12 00:43:44.118526 waagent[1826]: 2026-03-12T00:43:44.118437Z INFO Daemon Daemon Create user account if not exists Mar 12 00:43:44.122721 systemd-networkd[1360]: eth0: DHCPv6 lease lost Mar 12 00:43:44.123418 waagent[1826]: 2026-03-12T00:43:44.123352Z INFO Daemon Daemon User core already exists, skip useradd Mar 12 00:43:44.128149 waagent[1826]: 2026-03-12T00:43:44.128108Z INFO Daemon Daemon Configure sudoer Mar 12 00:43:44.131966 waagent[1826]: 2026-03-12T00:43:44.131921Z INFO Daemon Daemon Configure sshd Mar 12 00:43:44.136638 waagent[1826]: 2026-03-12T00:43:44.136591Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 12 00:43:44.146421 waagent[1826]: 2026-03-12T00:43:44.146329Z INFO Daemon Daemon Deploy ssh public key. Mar 12 00:43:44.154486 systemd-networkd[1360]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 12 00:43:45.229899 waagent[1826]: 2026-03-12T00:43:45.229852Z INFO Daemon Daemon Provisioning complete Mar 12 00:43:45.245143 waagent[1826]: 2026-03-12T00:43:45.245095Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 12 00:43:45.250261 waagent[1826]: 2026-03-12T00:43:45.250216Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 12 00:43:45.257697 waagent[1826]: 2026-03-12T00:43:45.257655Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 12 00:43:45.386271 waagent[1903]: 2026-03-12T00:43:45.385604Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 12 00:43:45.386271 waagent[1903]: 2026-03-12T00:43:45.385754Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 12 00:43:45.386271 waagent[1903]: 2026-03-12T00:43:45.385808Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 12 00:43:45.438619 waagent[1903]: 2026-03-12T00:43:45.438538Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 12 00:43:45.438941 waagent[1903]: 2026-03-12T00:43:45.438903Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 00:43:45.439157 waagent[1903]: 2026-03-12T00:43:45.439118Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 00:43:45.446815 waagent[1903]: 2026-03-12T00:43:45.446752Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 12 00:43:45.452297 waagent[1903]: 2026-03-12T00:43:45.452252Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 12 00:43:45.452931 waagent[1903]: 2026-03-12T00:43:45.452892Z INFO ExtHandler Mar 12 00:43:45.453096 waagent[1903]: 2026-03-12T00:43:45.453061Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b2b6007b-4ba5-46ba-88b0-f1c06891bdb6 eTag: 10600741933898852667 source: Fabric] Mar 12 00:43:45.453508 waagent[1903]: 2026-03-12T00:43:45.453466Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 12 00:43:45.454184 waagent[1903]: 2026-03-12T00:43:45.454139Z INFO ExtHandler Mar 12 00:43:45.454924 waagent[1903]: 2026-03-12T00:43:45.454292Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 12 00:43:45.458392 waagent[1903]: 2026-03-12T00:43:45.457317Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 12 00:43:45.527261 waagent[1903]: 2026-03-12T00:43:45.527129Z INFO ExtHandler Downloaded certificate {'thumbprint': '9BC1B557B5DA9B7D014C45563CB9A4D3400719BE', 'hasPrivateKey': True} Mar 12 00:43:45.527762 waagent[1903]: 2026-03-12T00:43:45.527717Z INFO ExtHandler Fetch goal state completed Mar 12 00:43:45.541533 waagent[1903]: 2026-03-12T00:43:45.541483Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1903 Mar 12 00:43:45.541680 waagent[1903]: 2026-03-12T00:43:45.541648Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 12 00:43:45.543252 waagent[1903]: 2026-03-12T00:43:45.543209Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 12 00:43:45.543634 waagent[1903]: 2026-03-12T00:43:45.543596Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 12 00:43:45.576178 waagent[1903]: 2026-03-12T00:43:45.575749Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 12 00:43:45.576178 waagent[1903]: 2026-03-12T00:43:45.575960Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 12 00:43:45.581935 waagent[1903]: 2026-03-12T00:43:45.581899Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 12 00:43:45.587998 systemd[1]: Reloading requested from client PID 1916 ('systemctl') (unit waagent.service)... Mar 12 00:43:45.588223 systemd[1]: Reloading... 
Mar 12 00:43:45.655577 zram_generator::config[1946]: No configuration found. Mar 12 00:43:45.755138 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 00:43:45.835015 systemd[1]: Reloading finished in 246 ms. Mar 12 00:43:45.853981 waagent[1903]: 2026-03-12T00:43:45.853611Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 12 00:43:45.859772 systemd[1]: Reloading requested from client PID 2004 ('systemctl') (unit waagent.service)... Mar 12 00:43:45.859788 systemd[1]: Reloading... Mar 12 00:43:45.938574 zram_generator::config[2041]: No configuration found. Mar 12 00:43:46.040625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 00:43:46.115888 systemd[1]: Reloading finished in 255 ms. Mar 12 00:43:46.143190 waagent[1903]: 2026-03-12T00:43:46.140483Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 12 00:43:46.143190 waagent[1903]: 2026-03-12T00:43:46.140653Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 12 00:43:46.934360 waagent[1903]: 2026-03-12T00:43:46.934195Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 12 00:43:46.937782 waagent[1903]: 2026-03-12T00:43:46.937720Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 12 00:43:46.938652 waagent[1903]: 2026-03-12T00:43:46.938582Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 12 00:43:46.939047 waagent[1903]: 2026-03-12T00:43:46.938951Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 12 00:43:46.939183 waagent[1903]: 2026-03-12T00:43:46.939014Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 00:43:46.939537 waagent[1903]: 2026-03-12T00:43:46.939438Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 12 00:43:46.939609 waagent[1903]: 2026-03-12T00:43:46.939531Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 12 00:43:46.939784 waagent[1903]: 2026-03-12T00:43:46.939724Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 12 00:43:46.940206 waagent[1903]: 2026-03-12T00:43:46.940102Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 12 00:43:46.940327 waagent[1903]: 2026-03-12T00:43:46.940213Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 12 00:43:46.940553 waagent[1903]: 2026-03-12T00:43:46.940481Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 00:43:46.941009 waagent[1903]: 2026-03-12T00:43:46.940871Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 12 00:43:46.941754 waagent[1903]: 2026-03-12T00:43:46.941220Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 12 00:43:46.942138 waagent[1903]: 2026-03-12T00:43:46.942091Z INFO EnvHandler ExtHandler Configure routes Mar 12 00:43:46.942664 waagent[1903]: 2026-03-12T00:43:46.942609Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 12 00:43:46.942964 waagent[1903]: 2026-03-12T00:43:46.942914Z INFO EnvHandler ExtHandler Gateway:None Mar 12 00:43:46.944461 waagent[1903]: 2026-03-12T00:43:46.944396Z INFO EnvHandler ExtHandler Routes:None Mar 12 00:43:46.944926 waagent[1903]: 2026-03-12T00:43:46.944883Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 12 00:43:46.944926 waagent[1903]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 12 00:43:46.944926 waagent[1903]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 12 00:43:46.944926 waagent[1903]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 12 00:43:46.944926 waagent[1903]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 12 00:43:46.944926 waagent[1903]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 00:43:46.944926 waagent[1903]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 12 00:43:46.947077 waagent[1903]: 2026-03-12T00:43:46.947039Z INFO ExtHandler ExtHandler Mar 12 00:43:46.947594 waagent[1903]: 2026-03-12T00:43:46.947550Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 29c2dbd1-d53f-4ef7-8c41-411279f89075 correlation 93dcfb61-c313-4743-b07e-395575e20950 created: 2026-03-12T00:42:45.790031Z] Mar 12 00:43:46.948130 waagent[1903]: 2026-03-12T00:43:46.948081Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 12 00:43:46.951399 waagent[1903]: 2026-03-12T00:43:46.950948Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Mar 12 00:43:46.986941 waagent[1903]: 2026-03-12T00:43:46.986455Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EF77C187-80CF-4566-8A2D-C1DA101C749C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 12 00:43:47.024260 waagent[1903]: 2026-03-12T00:43:47.024179Z INFO MonitorHandler ExtHandler Network interfaces: Mar 12 00:43:47.024260 waagent[1903]: Executing ['ip', '-a', '-o', 'link']: Mar 12 00:43:47.024260 waagent[1903]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 12 00:43:47.024260 waagent[1903]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:70:e2 brd ff:ff:ff:ff:ff:ff Mar 12 00:43:47.024260 waagent[1903]: 3: enP42032s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:70:e2 brd ff:ff:ff:ff:ff:ff\ altname enP42032p0s2 Mar 12 00:43:47.024260 waagent[1903]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 12 00:43:47.024260 waagent[1903]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 12 00:43:47.024260 waagent[1903]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 12 00:43:47.024260 waagent[1903]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 12 00:43:47.024260 waagent[1903]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 12 00:43:47.024260 waagent[1903]: 2: eth0 inet6 fe80::20d:3aff:fec5:70e2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 12 00:43:47.105459 waagent[1903]: 2026-03-12T00:43:47.105391Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 12 00:43:47.105459 waagent[1903]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 00:43:47.105459 waagent[1903]: pkts bytes target prot opt in out source destination Mar 12 00:43:47.105459 waagent[1903]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 00:43:47.105459 waagent[1903]: pkts bytes target prot opt in out source destination Mar 12 00:43:47.105459 waagent[1903]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 00:43:47.105459 waagent[1903]: pkts bytes target prot opt in out source destination Mar 12 00:43:47.105459 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 12 00:43:47.105459 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 00:43:47.105459 waagent[1903]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 00:43:47.108667 waagent[1903]: 2026-03-12T00:43:47.108602Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 12 00:43:47.108667 waagent[1903]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 00:43:47.108667 waagent[1903]: pkts bytes target prot opt in out source destination Mar 12 00:43:47.108667 waagent[1903]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 12 00:43:47.108667 waagent[1903]: pkts bytes target prot opt in out source destination Mar 12 00:43:47.108667 waagent[1903]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 12 00:43:47.108667 waagent[1903]: pkts bytes target prot opt in out source destination Mar 12 00:43:47.108667 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 12 00:43:47.108667 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 12 00:43:47.108667 waagent[1903]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 12 00:43:47.108915 waagent[1903]: 2026-03-12T00:43:47.108879Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 12 
00:43:52.448295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 00:43:52.456985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 00:43:52.557797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 00:43:52.561797 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 00:43:52.625734 kubelet[2131]: E0312 00:43:52.625692 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 00:43:52.629417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 00:43:52.629560 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 00:44:01.699897 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 00:44:01.708612 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:54596.service - OpenSSH per-connection server daemon (10.200.16.10:54596). Mar 12 00:44:02.235399 sshd[2139]: Accepted publickey for core from 10.200.16.10 port 54596 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o Mar 12 00:44:02.236281 sshd[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 00:44:02.240894 systemd-logind[1701]: New session 3 of user core. Mar 12 00:44:02.247746 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 00:44:02.664335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 00:44:02.666422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 12 00:44:02.671118 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:54600.service - OpenSSH per-connection server daemon (10.200.16.10:54600). Mar 12 00:44:02.841085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 00:44:02.851616 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 00:44:02.887706 kubelet[2154]: E0312 00:44:02.887611 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 00:44:02.890259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 00:44:02.890398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 00:44:03.156062 sshd[2145]: Accepted publickey for core from 10.200.16.10 port 54600 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o Mar 12 00:44:03.157738 sshd[2145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 00:44:03.162239 systemd-logind[1701]: New session 4 of user core. Mar 12 00:44:03.168540 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 00:44:03.507681 sshd[2145]: pam_unix(sshd:session): session closed for user core Mar 12 00:44:03.510941 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:54600.service: Deactivated successfully. Mar 12 00:44:03.512911 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 00:44:03.513614 systemd-logind[1701]: Session 4 logged out. Waiting for processes to exit. Mar 12 00:44:03.514399 systemd-logind[1701]: Removed session 4. 
Mar 12 00:44:03.594760 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:54614.service - OpenSSH per-connection server daemon (10.200.16.10:54614).
Mar 12 00:44:04.081309 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 54614 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:44:04.082645 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:44:04.086163 systemd-logind[1701]: New session 5 of user core.
Mar 12 00:44:04.098505 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 12 00:44:04.427917 sshd[2166]: pam_unix(sshd:session): session closed for user core
Mar 12 00:44:04.431764 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:54614.service: Deactivated successfully.
Mar 12 00:44:04.433410 systemd[1]: session-5.scope: Deactivated successfully.
Mar 12 00:44:04.434182 systemd-logind[1701]: Session 5 logged out. Waiting for processes to exit.
Mar 12 00:44:04.435002 systemd-logind[1701]: Removed session 5.
Mar 12 00:44:04.463106 chronyd[1687]: Selected source PHC0
Mar 12 00:44:04.515133 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:54622.service - OpenSSH per-connection server daemon (10.200.16.10:54622).
Mar 12 00:44:05.002400 sshd[2173]: Accepted publickey for core from 10.200.16.10 port 54622 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:44:05.003200 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:44:05.006647 systemd-logind[1701]: New session 6 of user core.
Mar 12 00:44:05.014527 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 12 00:44:05.354863 sshd[2173]: pam_unix(sshd:session): session closed for user core
Mar 12 00:44:05.358224 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:54622.service: Deactivated successfully.
Mar 12 00:44:05.359745 systemd[1]: session-6.scope: Deactivated successfully.
Mar 12 00:44:05.360329 systemd-logind[1701]: Session 6 logged out. Waiting for processes to exit.
Mar 12 00:44:05.361051 systemd-logind[1701]: Removed session 6.
Mar 12 00:44:05.432697 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:54632.service - OpenSSH per-connection server daemon (10.200.16.10:54632).
Mar 12 00:44:05.902399 sshd[2180]: Accepted publickey for core from 10.200.16.10 port 54632 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:44:05.903222 sshd[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:44:05.906815 systemd-logind[1701]: New session 7 of user core.
Mar 12 00:44:05.914495 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 00:44:06.287494 sudo[2183]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 12 00:44:06.287777 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 00:44:06.317484 sudo[2183]: pam_unix(sudo:session): session closed for user root
Mar 12 00:44:06.391350 sshd[2180]: pam_unix(sshd:session): session closed for user core
Mar 12 00:44:06.394974 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:54632.service: Deactivated successfully.
Mar 12 00:44:06.397132 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 00:44:06.399729 systemd-logind[1701]: Session 7 logged out. Waiting for processes to exit.
Mar 12 00:44:06.400959 systemd-logind[1701]: Removed session 7.
Mar 12 00:44:06.483205 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:54638.service - OpenSSH per-connection server daemon (10.200.16.10:54638).
Mar 12 00:44:06.970402 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 54638 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:44:06.971558 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:44:06.975856 systemd-logind[1701]: New session 8 of user core.
Mar 12 00:44:06.980585 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 00:44:07.244466 sudo[2192]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 12 00:44:07.244753 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 00:44:07.247839 sudo[2192]: pam_unix(sudo:session): session closed for user root
Mar 12 00:44:07.252207 sudo[2191]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 12 00:44:07.252755 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 00:44:07.265659 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 12 00:44:07.266700 auditctl[2195]: No rules
Mar 12 00:44:07.267119 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 00:44:07.267568 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 12 00:44:07.270023 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 00:44:07.292795 augenrules[2213]: No rules
Mar 12 00:44:07.294190 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 00:44:07.295323 sudo[2191]: pam_unix(sudo:session): session closed for user root
Mar 12 00:44:07.374595 sshd[2188]: pam_unix(sshd:session): session closed for user core
Mar 12 00:44:07.377216 systemd-logind[1701]: Session 8 logged out. Waiting for processes to exit.
Mar 12 00:44:07.378795 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:54638.service: Deactivated successfully.
Mar 12 00:44:07.380594 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 00:44:07.381748 systemd-logind[1701]: Removed session 8.
Mar 12 00:44:07.453255 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:54652.service - OpenSSH per-connection server daemon (10.200.16.10:54652).
Mar 12 00:44:07.920402 sshd[2221]: Accepted publickey for core from 10.200.16.10 port 54652 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:44:07.921201 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:44:07.924846 systemd-logind[1701]: New session 9 of user core.
Mar 12 00:44:07.932498 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 00:44:08.181696 sudo[2224]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 00:44:08.181978 sudo[2224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 00:44:09.244712 (dockerd)[2239]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 00:44:09.244805 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 00:44:09.881563 dockerd[2239]: time="2026-03-12T00:44:09.881508333Z" level=info msg="Starting up"
Mar 12 00:44:10.295441 dockerd[2239]: time="2026-03-12T00:44:10.295398453Z" level=info msg="Loading containers: start."
Mar 12 00:44:10.483415 kernel: Initializing XFRM netlink socket
Mar 12 00:44:10.678345 systemd-networkd[1360]: docker0: Link UP
Mar 12 00:44:10.702884 dockerd[2239]: time="2026-03-12T00:44:10.702336533Z" level=info msg="Loading containers: done."
Mar 12 00:44:10.712189 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2785617033-merged.mount: Deactivated successfully.
Mar 12 00:44:10.722925 dockerd[2239]: time="2026-03-12T00:44:10.722825213Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 00:44:10.723224 dockerd[2239]: time="2026-03-12T00:44:10.723101813Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 12 00:44:10.723524 dockerd[2239]: time="2026-03-12T00:44:10.723307973Z" level=info msg="Daemon has completed initialization"
Mar 12 00:44:10.780341 dockerd[2239]: time="2026-03-12T00:44:10.780276893Z" level=info msg="API listen on /run/docker.sock"
Mar 12 00:44:10.780891 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 00:44:11.272993 containerd[1719]: time="2026-03-12T00:44:11.272945653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 12 00:44:12.122943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653698353.mount: Deactivated successfully.
Mar 12 00:44:12.948284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 00:44:12.956207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 00:44:13.050818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 00:44:13.055104 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 00:44:13.131319 kubelet[2436]: E0312 00:44:13.131266 2436 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 00:44:13.134243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 00:44:13.134476 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 00:44:14.036409 containerd[1719]: time="2026-03-12T00:44:14.036021353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:14.041470 containerd[1719]: time="2026-03-12T00:44:14.041234524Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 12 00:44:14.044687 containerd[1719]: time="2026-03-12T00:44:14.044641651Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:14.052845 containerd[1719]: time="2026-03-12T00:44:14.051543825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:14.052845 containerd[1719]: time="2026-03-12T00:44:14.052565947Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.779579134s"
Mar 12 00:44:14.052845 containerd[1719]: time="2026-03-12T00:44:14.052598747Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 12 00:44:14.053752 containerd[1719]: time="2026-03-12T00:44:14.053726110Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 12 00:44:15.879745 containerd[1719]: time="2026-03-12T00:44:15.879690838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:15.882357 containerd[1719]: time="2026-03-12T00:44:15.882137643Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 12 00:44:15.887014 containerd[1719]: time="2026-03-12T00:44:15.885709490Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:15.890305 containerd[1719]: time="2026-03-12T00:44:15.890274300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:15.891500 containerd[1719]: time="2026-03-12T00:44:15.891469662Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.837711472s"
Mar 12 00:44:15.891559 containerd[1719]: time="2026-03-12T00:44:15.891501942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 12 00:44:15.891968 containerd[1719]: time="2026-03-12T00:44:15.891917703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 12 00:44:17.599201 containerd[1719]: time="2026-03-12T00:44:17.599147344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:17.601348 containerd[1719]: time="2026-03-12T00:44:17.601136108Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 12 00:44:17.604392 containerd[1719]: time="2026-03-12T00:44:17.603612153Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:17.608636 containerd[1719]: time="2026-03-12T00:44:17.608271603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:17.609321 containerd[1719]: time="2026-03-12T00:44:17.609291525Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.717343822s"
Mar 12 00:44:17.609364 containerd[1719]: time="2026-03-12T00:44:17.609323485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 12 00:44:17.609910 containerd[1719]: time="2026-03-12T00:44:17.609889166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 12 00:44:18.857873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532504179.mount: Deactivated successfully.
Mar 12 00:44:19.379637 containerd[1719]: time="2026-03-12T00:44:19.379591317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:19.381859 containerd[1719]: time="2026-03-12T00:44:19.381831959Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 12 00:44:19.384347 containerd[1719]: time="2026-03-12T00:44:19.384321801Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:19.387906 containerd[1719]: time="2026-03-12T00:44:19.387881404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:19.388545 containerd[1719]: time="2026-03-12T00:44:19.388425204Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.778505598s"
Mar 12 00:44:19.388545 containerd[1719]: time="2026-03-12T00:44:19.388459564Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 12 00:44:19.388877 containerd[1719]: time="2026-03-12T00:44:19.388844165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 12 00:44:20.034526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401606238.mount: Deactivated successfully.
Mar 12 00:44:21.733961 containerd[1719]: time="2026-03-12T00:44:21.733910960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:21.736771 containerd[1719]: time="2026-03-12T00:44:21.736526682Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 12 00:44:21.739638 containerd[1719]: time="2026-03-12T00:44:21.739264884Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:21.746369 containerd[1719]: time="2026-03-12T00:44:21.745664249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:21.747641 containerd[1719]: time="2026-03-12T00:44:21.747610650Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.358731045s"
Mar 12 00:44:21.747738 containerd[1719]: time="2026-03-12T00:44:21.747723690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 12 00:44:21.748168 containerd[1719]: time="2026-03-12T00:44:21.748149251Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 12 00:44:22.342182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430401138.mount: Deactivated successfully.
Mar 12 00:44:22.358406 containerd[1719]: time="2026-03-12T00:44:22.358170294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:22.361481 containerd[1719]: time="2026-03-12T00:44:22.361303176Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 12 00:44:22.364979 containerd[1719]: time="2026-03-12T00:44:22.363719898Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:22.368389 containerd[1719]: time="2026-03-12T00:44:22.367491060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:22.368389 containerd[1719]: time="2026-03-12T00:44:22.368268941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 619.91985ms"
Mar 12 00:44:22.368389 containerd[1719]: time="2026-03-12T00:44:22.368295581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 12 00:44:22.369192 containerd[1719]: time="2026-03-12T00:44:22.369165742Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 12 00:44:23.002895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725749931.mount: Deactivated successfully.
Mar 12 00:44:23.199758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 12 00:44:23.205729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 00:44:23.345566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 00:44:23.354953 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 00:44:23.366398 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 12 00:44:23.430723 kubelet[2539]: E0312 00:44:23.430675 2539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 00:44:23.433469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 00:44:23.433619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 00:44:24.521406 containerd[1719]: time="2026-03-12T00:44:24.521325984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:24.526676 containerd[1719]: time="2026-03-12T00:44:24.526647068Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 12 00:44:24.529318 containerd[1719]: time="2026-03-12T00:44:24.529290910Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:24.535642 containerd[1719]: time="2026-03-12T00:44:24.535577155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:24.536678 containerd[1719]: time="2026-03-12T00:44:24.536648035Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.167446253s"
Mar 12 00:44:24.536892 containerd[1719]: time="2026-03-12T00:44:24.536770195Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 12 00:44:26.002402 update_engine[1706]: I20260312 00:44:26.001762 1706 update_attempter.cc:509] Updating boot flags...
Mar 12 00:44:26.081420 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2631)
Mar 12 00:44:29.115601 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 00:44:29.126640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 00:44:29.155148 systemd[1]: Reloading requested from client PID 2665 ('systemctl') (unit session-9.scope)...
Mar 12 00:44:29.155162 systemd[1]: Reloading...
Mar 12 00:44:29.267407 zram_generator::config[2702]: No configuration found.
Mar 12 00:44:29.351629 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 00:44:29.428977 systemd[1]: Reloading finished in 273 ms.
Mar 12 00:44:29.473190 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 12 00:44:29.473267 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 12 00:44:29.473633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 00:44:29.479973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 00:44:29.822710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 00:44:29.832766 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 00:44:29.896208 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 00:44:29.896208 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 00:44:29.896208 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 00:44:29.896554 kubelet[2773]: I0312 00:44:29.896243 2773 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 00:44:30.806735 kubelet[2773]: I0312 00:44:30.806696 2773 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 12 00:44:30.806735 kubelet[2773]: I0312 00:44:30.806723 2773 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 00:44:30.806949 kubelet[2773]: I0312 00:44:30.806933 2773 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 00:44:30.827732 kubelet[2773]: E0312 00:44:30.827693 2773 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 00:44:30.829056 kubelet[2773]: I0312 00:44:30.828712 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 00:44:30.836970 kubelet[2773]: E0312 00:44:30.836935 2773 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 12 00:44:30.836970 kubelet[2773]: I0312 00:44:30.836972 2773 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 12 00:44:30.840143 kubelet[2773]: I0312 00:44:30.840123 2773 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 12 00:44:30.842051 kubelet[2773]: I0312 00:44:30.842014 2773 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 00:44:30.842201 kubelet[2773]: I0312 00:44:30.842054 2773 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d10d02cd33","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 00:44:30.842297 kubelet[2773]: I0312 00:44:30.842205 2773 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 00:44:30.842297 kubelet[2773]: I0312 00:44:30.842214 2773 container_manager_linux.go:303] "Creating device plugin manager"
Mar 12 00:44:30.842338 kubelet[2773]: I0312 00:44:30.842331 2773 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 00:44:30.845186 kubelet[2773]: I0312 00:44:30.845168 2773 kubelet.go:480] "Attempting to sync node with API server"
Mar 12 00:44:30.845249 kubelet[2773]: I0312 00:44:30.845190 2773 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 00:44:30.845249 kubelet[2773]: I0312 00:44:30.845214 2773 kubelet.go:386] "Adding apiserver pod source"
Mar 12 00:44:30.846772 kubelet[2773]: I0312 00:44:30.846390 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 00:44:30.852275 kubelet[2773]: E0312 00:44:30.852247 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d10d02cd33&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 00:44:30.852734 kubelet[2773]: I0312 00:44:30.852715 2773 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 12 00:44:30.853388 kubelet[2773]: I0312 00:44:30.853357 2773 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 00:44:30.853518 kubelet[2773]: W0312 00:44:30.853507 2773 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 12 00:44:30.855156 kubelet[2773]: E0312 00:44:30.855041 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 00:44:30.856762 kubelet[2773]: I0312 00:44:30.856743 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 12 00:44:30.856823 kubelet[2773]: I0312 00:44:30.856784 2773 server.go:1289] "Started kubelet"
Mar 12 00:44:30.859060 kubelet[2773]: I0312 00:44:30.856879 2773 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 00:44:30.859060 kubelet[2773]: I0312 00:44:30.857623 2773 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 00:44:30.859060 kubelet[2773]: I0312 00:44:30.858565 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 00:44:30.859060 kubelet[2773]: I0312 00:44:30.858813 2773 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 00:44:30.860492 kubelet[2773]: E0312 00:44:30.859498 2773 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-d10d02cd33.189bf15f850be81d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-d10d02cd33,UID:ci-4081.3.6-n-d10d02cd33,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-d10d02cd33,},FirstTimestamp:2026-03-12 00:44:30.856759325 +0000 UTC m=+1.020746859,LastTimestamp:2026-03-12 00:44:30.856759325 +0000 UTC m=+1.020746859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-d10d02cd33,}"
Mar 12 00:44:30.862586 kubelet[2773]: I0312 00:44:30.861723 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 00:44:30.862586 kubelet[2773]: I0312 00:44:30.861988 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 00:44:30.864473 kubelet[2773]: E0312 00:44:30.864453 2773 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 00:44:30.865218 kubelet[2773]: E0312 00:44:30.865193 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d10d02cd33\" not found"
Mar 12 00:44:30.865314 kubelet[2773]: I0312 00:44:30.865304 2773 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 12 00:44:30.865606 kubelet[2773]: I0312 00:44:30.865590 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 12 00:44:30.865721 kubelet[2773]: I0312 00:44:30.865711 2773 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 00:44:30.867710 kubelet[2773]: I0312 00:44:30.867691 2773 factory.go:223] Registration of the systemd container factory successfully
Mar 12 00:44:30.867881 kubelet[2773]: I0312 00:44:30.867864 2773 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 00:44:30.868198 kubelet[2773]: E0312 00:44:30.868179 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused"
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 00:44:30.869030 kubelet[2773]: I0312 00:44:30.869012 2773 factory.go:223] Registration of the containerd container factory successfully Mar 12 00:44:30.875224 kubelet[2773]: E0312 00:44:30.875185 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d10d02cd33?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Mar 12 00:44:30.897748 kubelet[2773]: I0312 00:44:30.897567 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 00:44:30.901555 kubelet[2773]: I0312 00:44:30.901531 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 00:44:30.901555 kubelet[2773]: I0312 00:44:30.901559 2773 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 00:44:30.901643 kubelet[2773]: I0312 00:44:30.901577 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 00:44:30.901643 kubelet[2773]: I0312 00:44:30.901583 2773 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 00:44:30.901643 kubelet[2773]: E0312 00:44:30.901627 2773 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 00:44:30.905513 kubelet[2773]: E0312 00:44:30.905103 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 00:44:30.947610 kubelet[2773]: I0312 00:44:30.947514 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 00:44:30.947610 kubelet[2773]: I0312 00:44:30.947562 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 00:44:30.947610 kubelet[2773]: I0312 00:44:30.947582 2773 state_mem.go:36] "Initialized new in-memory state store" Mar 12 00:44:30.965698 kubelet[2773]: E0312 00:44:30.965673 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" Mar 12 00:44:30.972248 kubelet[2773]: I0312 00:44:30.972225 2773 policy_none.go:49] "None policy: Start" Mar 12 00:44:30.972480 kubelet[2773]: I0312 00:44:30.972358 2773 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 00:44:30.972480 kubelet[2773]: I0312 00:44:30.972404 2773 state_mem.go:35] "Initializing new in-memory state store" Mar 12 00:44:30.984224 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 00:44:30.998232 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 12 00:44:31.001141 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 00:44:31.002051 kubelet[2773]: E0312 00:44:31.001903 2773 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 00:44:31.009143 kubelet[2773]: E0312 00:44:31.009115 2773 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 00:44:31.009310 kubelet[2773]: I0312 00:44:31.009296 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 00:44:31.009346 kubelet[2773]: I0312 00:44:31.009311 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 00:44:31.009735 kubelet[2773]: I0312 00:44:31.009590 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 00:44:31.011434 kubelet[2773]: E0312 00:44:31.010952 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 00:44:31.011434 kubelet[2773]: E0312 00:44:31.011012 2773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-d10d02cd33\" not found" Mar 12 00:44:31.075827 kubelet[2773]: E0312 00:44:31.075709 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d10d02cd33?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Mar 12 00:44:31.111413 kubelet[2773]: I0312 00:44:31.111019 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.111413 kubelet[2773]: E0312 00:44:31.111356 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.215459 systemd[1]: Created slice kubepods-burstable-podf599e0b80efa697b64292fc49d539189.slice - libcontainer container kubepods-burstable-podf599e0b80efa697b64292fc49d539189.slice. Mar 12 00:44:31.221030 kubelet[2773]: E0312 00:44:31.221005 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.225986 systemd[1]: Created slice kubepods-burstable-pode5ca196ba85fedbc953124feea33a907.slice - libcontainer container kubepods-burstable-pode5ca196ba85fedbc953124feea33a907.slice. 
Mar 12 00:44:31.237514 kubelet[2773]: E0312 00:44:31.237496 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.240020 systemd[1]: Created slice kubepods-burstable-pod2632cc1885bada80a77634d9a874c86d.slice - libcontainer container kubepods-burstable-pod2632cc1885bada80a77634d9a874c86d.slice. Mar 12 00:44:31.241481 kubelet[2773]: E0312 00:44:31.241463 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269195 kubelet[2773]: I0312 00:44:31.268900 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269195 kubelet[2773]: I0312 00:44:31.268932 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2632cc1885bada80a77634d9a874c86d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-d10d02cd33\" (UID: \"2632cc1885bada80a77634d9a874c86d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269195 kubelet[2773]: I0312 00:44:31.268949 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f599e0b80efa697b64292fc49d539189-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" (UID: \"f599e0b80efa697b64292fc49d539189\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269195 
kubelet[2773]: I0312 00:44:31.268965 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f599e0b80efa697b64292fc49d539189-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" (UID: \"f599e0b80efa697b64292fc49d539189\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269195 kubelet[2773]: I0312 00:44:31.268980 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269367 kubelet[2773]: I0312 00:44:31.269004 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269367 kubelet[2773]: I0312 00:44:31.269018 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f599e0b80efa697b64292fc49d539189-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" (UID: \"f599e0b80efa697b64292fc49d539189\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269367 kubelet[2773]: I0312 00:44:31.269032 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-ca-certs\") pod 
\"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.269367 kubelet[2773]: I0312 00:44:31.269049 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.313795 kubelet[2773]: I0312 00:44:31.313770 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.314115 kubelet[2773]: E0312 00:44:31.314093 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.476577 kubelet[2773]: E0312 00:44:31.476539 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d10d02cd33?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Mar 12 00:44:31.522359 containerd[1719]: time="2026-03-12T00:44:31.522321820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d10d02cd33,Uid:f599e0b80efa697b64292fc49d539189,Namespace:kube-system,Attempt:0,}" Mar 12 00:44:31.539327 containerd[1719]: time="2026-03-12T00:44:31.539295472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d10d02cd33,Uid:e5ca196ba85fedbc953124feea33a907,Namespace:kube-system,Attempt:0,}" Mar 12 00:44:31.543247 containerd[1719]: time="2026-03-12T00:44:31.543215114Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d10d02cd33,Uid:2632cc1885bada80a77634d9a874c86d,Namespace:kube-system,Attempt:0,}" Mar 12 00:44:31.716512 kubelet[2773]: I0312 00:44:31.716483 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.717044 kubelet[2773]: E0312 00:44:31.717019 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:31.807047 kubelet[2773]: E0312 00:44:31.806947 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 00:44:32.152183 kubelet[2773]: E0312 00:44:32.152071 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 00:44:32.277507 kubelet[2773]: E0312 00:44:32.277467 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d10d02cd33?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Mar 12 00:44:32.305922 kubelet[2773]: E0312 00:44:32.305897 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d10d02cd33&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 00:44:32.357086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725519093.mount: Deactivated successfully. Mar 12 00:44:32.382239 containerd[1719]: time="2026-03-12T00:44:32.381448888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 00:44:32.384284 containerd[1719]: time="2026-03-12T00:44:32.384161210Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 12 00:44:32.387931 containerd[1719]: time="2026-03-12T00:44:32.387202732Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 00:44:32.394388 containerd[1719]: time="2026-03-12T00:44:32.394333617Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 00:44:32.396854 containerd[1719]: time="2026-03-12T00:44:32.396814978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 00:44:32.400073 containerd[1719]: time="2026-03-12T00:44:32.400038581Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 00:44:32.402358 containerd[1719]: time="2026-03-12T00:44:32.402236062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 00:44:32.406326 containerd[1719]: time="2026-03-12T00:44:32.406282225Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 00:44:32.407400 containerd[1719]: time="2026-03-12T00:44:32.407165105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 867.803833ms" Mar 12 00:44:32.409469 containerd[1719]: time="2026-03-12T00:44:32.409439587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 866.162033ms" Mar 12 00:44:32.409993 containerd[1719]: time="2026-03-12T00:44:32.409971667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 887.550887ms" Mar 12 00:44:32.485793 kubelet[2773]: E0312 00:44:32.482936 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 00:44:32.559927 kubelet[2773]: I0312 00:44:32.559557 2773 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:32.559927 kubelet[2773]: E0312 00:44:32.559886 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:32.842402 kubelet[2773]: E0312 00:44:32.842345 2773 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 00:44:33.116679 containerd[1719]: time="2026-03-12T00:44:33.115919150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:44:33.116679 containerd[1719]: time="2026-03-12T00:44:33.115994070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:44:33.116679 containerd[1719]: time="2026-03-12T00:44:33.116006630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:44:33.117320 containerd[1719]: time="2026-03-12T00:44:33.117015671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:44:33.118997 containerd[1719]: time="2026-03-12T00:44:33.118689552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:44:33.118997 containerd[1719]: time="2026-03-12T00:44:33.118732112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:44:33.118997 containerd[1719]: time="2026-03-12T00:44:33.118746792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:44:33.118997 containerd[1719]: time="2026-03-12T00:44:33.118822512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:44:33.120582 containerd[1719]: time="2026-03-12T00:44:33.119590393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:44:33.120582 containerd[1719]: time="2026-03-12T00:44:33.119626233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:44:33.120582 containerd[1719]: time="2026-03-12T00:44:33.119636353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:44:33.120582 containerd[1719]: time="2026-03-12T00:44:33.119693473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:44:33.137582 systemd[1]: Started cri-containerd-327984b49624d06aeb3fa232939ea24431fd2a948660fb6973ce45e4fbef7946.scope - libcontainer container 327984b49624d06aeb3fa232939ea24431fd2a948660fb6973ce45e4fbef7946. Mar 12 00:44:33.150518 systemd[1]: Started cri-containerd-35fe684a1c407475f6ca28d99bfa68942b1f8331cfef939fdf1af906092a9edd.scope - libcontainer container 35fe684a1c407475f6ca28d99bfa68942b1f8331cfef939fdf1af906092a9edd. Mar 12 00:44:33.152458 systemd[1]: Started cri-containerd-bedc9b77c514a9f4bb183d99e882d0f2aabba5c6d169a0bb47ab61c5bb595325.scope - libcontainer container bedc9b77c514a9f4bb183d99e882d0f2aabba5c6d169a0bb47ab61c5bb595325. 
Mar 12 00:44:33.192643 containerd[1719]: time="2026-03-12T00:44:33.192209602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d10d02cd33,Uid:e5ca196ba85fedbc953124feea33a907,Namespace:kube-system,Attempt:0,} returns sandbox id \"327984b49624d06aeb3fa232939ea24431fd2a948660fb6973ce45e4fbef7946\"" Mar 12 00:44:33.193486 containerd[1719]: time="2026-03-12T00:44:33.193458283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d10d02cd33,Uid:f599e0b80efa697b64292fc49d539189,Namespace:kube-system,Attempt:0,} returns sandbox id \"bedc9b77c514a9f4bb183d99e882d0f2aabba5c6d169a0bb47ab61c5bb595325\"" Mar 12 00:44:33.204871 containerd[1719]: time="2026-03-12T00:44:33.204829571Z" level=info msg="CreateContainer within sandbox \"327984b49624d06aeb3fa232939ea24431fd2a948660fb6973ce45e4fbef7946\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 00:44:33.208105 containerd[1719]: time="2026-03-12T00:44:33.208078653Z" level=info msg="CreateContainer within sandbox \"bedc9b77c514a9f4bb183d99e882d0f2aabba5c6d169a0bb47ab61c5bb595325\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 00:44:33.209961 containerd[1719]: time="2026-03-12T00:44:33.209733494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d10d02cd33,Uid:2632cc1885bada80a77634d9a874c86d,Namespace:kube-system,Attempt:0,} returns sandbox id \"35fe684a1c407475f6ca28d99bfa68942b1f8331cfef939fdf1af906092a9edd\"" Mar 12 00:44:33.221608 containerd[1719]: time="2026-03-12T00:44:33.221570823Z" level=info msg="CreateContainer within sandbox \"35fe684a1c407475f6ca28d99bfa68942b1f8331cfef939fdf1af906092a9edd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 00:44:33.255840 containerd[1719]: time="2026-03-12T00:44:33.255799526Z" level=info msg="CreateContainer within sandbox 
\"bedc9b77c514a9f4bb183d99e882d0f2aabba5c6d169a0bb47ab61c5bb595325\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d81b53cc4ac4ef3efef1f5d9b03d242cce18dc7cf693e6f8cba567a3622274f\"" Mar 12 00:44:33.256434 containerd[1719]: time="2026-03-12T00:44:33.256408726Z" level=info msg="StartContainer for \"4d81b53cc4ac4ef3efef1f5d9b03d242cce18dc7cf693e6f8cba567a3622274f\"" Mar 12 00:44:33.276697 containerd[1719]: time="2026-03-12T00:44:33.276608660Z" level=info msg="CreateContainer within sandbox \"327984b49624d06aeb3fa232939ea24431fd2a948660fb6973ce45e4fbef7946\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1882e1ab5743ecce96bf0818340e64e75a501cb8e74bb72847057c5853174b7f\"" Mar 12 00:44:33.277153 containerd[1719]: time="2026-03-12T00:44:33.277125661Z" level=info msg="StartContainer for \"1882e1ab5743ecce96bf0818340e64e75a501cb8e74bb72847057c5853174b7f\"" Mar 12 00:44:33.278572 systemd[1]: Started cri-containerd-4d81b53cc4ac4ef3efef1f5d9b03d242cce18dc7cf693e6f8cba567a3622274f.scope - libcontainer container 4d81b53cc4ac4ef3efef1f5d9b03d242cce18dc7cf693e6f8cba567a3622274f. Mar 12 00:44:33.285011 containerd[1719]: time="2026-03-12T00:44:33.284972506Z" level=info msg="CreateContainer within sandbox \"35fe684a1c407475f6ca28d99bfa68942b1f8331cfef939fdf1af906092a9edd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc1820969ee494cc437398d5b99ab6acd2277316ddfb2b3839a01a8187e3c29d\"" Mar 12 00:44:33.286604 containerd[1719]: time="2026-03-12T00:44:33.285583746Z" level=info msg="StartContainer for \"dc1820969ee494cc437398d5b99ab6acd2277316ddfb2b3839a01a8187e3c29d\"" Mar 12 00:44:33.308534 systemd[1]: Started cri-containerd-1882e1ab5743ecce96bf0818340e64e75a501cb8e74bb72847057c5853174b7f.scope - libcontainer container 1882e1ab5743ecce96bf0818340e64e75a501cb8e74bb72847057c5853174b7f. 
Mar 12 00:44:33.321530 systemd[1]: Started cri-containerd-dc1820969ee494cc437398d5b99ab6acd2277316ddfb2b3839a01a8187e3c29d.scope - libcontainer container dc1820969ee494cc437398d5b99ab6acd2277316ddfb2b3839a01a8187e3c29d. Mar 12 00:44:33.340474 containerd[1719]: time="2026-03-12T00:44:33.340425744Z" level=info msg="StartContainer for \"4d81b53cc4ac4ef3efef1f5d9b03d242cce18dc7cf693e6f8cba567a3622274f\" returns successfully" Mar 12 00:44:33.368658 containerd[1719]: time="2026-03-12T00:44:33.368566803Z" level=info msg="StartContainer for \"1882e1ab5743ecce96bf0818340e64e75a501cb8e74bb72847057c5853174b7f\" returns successfully" Mar 12 00:44:33.406699 containerd[1719]: time="2026-03-12T00:44:33.406534949Z" level=info msg="StartContainer for \"dc1820969ee494cc437398d5b99ab6acd2277316ddfb2b3839a01a8187e3c29d\" returns successfully" Mar 12 00:44:33.915151 kubelet[2773]: E0312 00:44:33.915122 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:33.917675 kubelet[2773]: E0312 00:44:33.917655 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:33.919770 kubelet[2773]: E0312 00:44:33.919637 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:34.161647 kubelet[2773]: I0312 00:44:34.161619 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:34.922871 kubelet[2773]: E0312 00:44:34.922841 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:34.923184 
kubelet[2773]: E0312 00:44:34.923168 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.122943 kubelet[2773]: E0312 00:44:36.122796 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.761608 kubelet[2773]: E0312 00:44:36.761574 2773 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-d10d02cd33\" not found" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.857050 kubelet[2773]: I0312 00:44:36.857017 2773 apiserver.go:52] "Watching apiserver" Mar 12 00:44:36.859389 kubelet[2773]: I0312 00:44:36.858884 2773 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.859389 kubelet[2773]: E0312 00:44:36.858918 2773 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-d10d02cd33\": node \"ci-4081.3.6-n-d10d02cd33\" not found" Mar 12 00:44:36.866695 kubelet[2773]: I0312 00:44:36.866659 2773 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 00:44:36.869804 kubelet[2773]: I0312 00:44:36.869779 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.913592 kubelet[2773]: E0312 00:44:36.913561 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.913592 kubelet[2773]: I0312 00:44:36.913588 2773 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.915315 kubelet[2773]: E0312 00:44:36.915283 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.915354 kubelet[2773]: I0312 00:44:36.915316 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:36.916797 kubelet[2773]: E0312 00:44:36.916777 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d10d02cd33\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:37.320864 kubelet[2773]: I0312 00:44:37.320835 2773 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:37.323471 kubelet[2773]: E0312 00:44:37.323448 2773 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" Mar 12 00:44:38.822267 systemd[1]: Reloading requested from client PID 3052 ('systemctl') (unit session-9.scope)... Mar 12 00:44:38.822638 systemd[1]: Reloading... Mar 12 00:44:38.930537 zram_generator::config[3090]: No configuration found. Mar 12 00:44:39.059770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 00:44:39.148190 systemd[1]: Reloading finished in 325 ms. 
Mar 12 00:44:39.181788 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 00:44:39.196415 systemd[1]: kubelet.service: Deactivated successfully.
Mar 12 00:44:39.196684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 00:44:39.196747 systemd[1]: kubelet.service: Consumed 1.335s CPU time, 131.0M memory peak, 0B memory swap peak.
Mar 12 00:44:39.203737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 00:44:39.303004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 00:44:39.307275 (kubelet)[3156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 00:44:39.342997 kubelet[3156]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 00:44:39.342997 kubelet[3156]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 00:44:39.342997 kubelet[3156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 00:44:39.342997 kubelet[3156]: I0312 00:44:39.342976 3156 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 00:44:39.347936 kubelet[3156]: I0312 00:44:39.347906 3156 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 12 00:44:39.347936 kubelet[3156]: I0312 00:44:39.347931 3156 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 00:44:39.348146 kubelet[3156]: I0312 00:44:39.348129 3156 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 00:44:39.349389 kubelet[3156]: I0312 00:44:39.349342 3156 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 12 00:44:39.351748 kubelet[3156]: I0312 00:44:39.351530 3156 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 00:44:39.356161 kubelet[3156]: E0312 00:44:39.356094 3156 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 12 00:44:39.356271 kubelet[3156]: I0312 00:44:39.356259 3156 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 12 00:44:39.358914 kubelet[3156]: I0312 00:44:39.358898 3156 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 12 00:44:39.359280 kubelet[3156]: I0312 00:44:39.359178 3156 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 00:44:39.359550 kubelet[3156]: I0312 00:44:39.359204 3156 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d10d02cd33","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 00:44:39.359550 kubelet[3156]: I0312 00:44:39.359483 3156 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 00:44:39.359550 kubelet[3156]: I0312 00:44:39.359493 3156 container_manager_linux.go:303] "Creating device plugin manager"
Mar 12 00:44:39.359771 kubelet[3156]: I0312 00:44:39.359630 3156 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 00:44:39.360067 kubelet[3156]: I0312 00:44:39.359972 3156 kubelet.go:480] "Attempting to sync node with API server"
Mar 12 00:44:39.360067 kubelet[3156]: I0312 00:44:39.359989 3156 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 00:44:39.360067 kubelet[3156]: I0312 00:44:39.360012 3156 kubelet.go:386] "Adding apiserver pod source"
Mar 12 00:44:39.360067 kubelet[3156]: I0312 00:44:39.360026 3156 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 00:44:39.362922 kubelet[3156]: I0312 00:44:39.362901 3156 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 12 00:44:39.364285 kubelet[3156]: I0312 00:44:39.364265 3156 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 00:44:39.368371 kubelet[3156]: I0312 00:44:39.368352 3156 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 12 00:44:39.368453 kubelet[3156]: I0312 00:44:39.368400 3156 server.go:1289] "Started kubelet"
Mar 12 00:44:39.371084 kubelet[3156]: I0312 00:44:39.371064 3156 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 00:44:39.383242 kubelet[3156]: I0312 00:44:39.383185 3156 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 00:44:39.385354 kubelet[3156]: I0312 00:44:39.385036 3156 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 00:44:39.392064 kubelet[3156]: I0312 00:44:39.389779 3156 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 00:44:39.392064 kubelet[3156]: I0312 00:44:39.389976 3156 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 00:44:39.392064 kubelet[3156]: I0312 00:44:39.390640 3156 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 00:44:39.392064 kubelet[3156]: I0312 00:44:39.391966 3156 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 12 00:44:39.392220 kubelet[3156]: E0312 00:44:39.392180 3156 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d10d02cd33\" not found"
Mar 12 00:44:39.395923 kubelet[3156]: I0312 00:44:39.394793 3156 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 12 00:44:39.395923 kubelet[3156]: I0312 00:44:39.394942 3156 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 00:44:39.398459 kubelet[3156]: I0312 00:44:39.398427 3156 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 12 00:44:39.399996 kubelet[3156]: I0312 00:44:39.399921 3156 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 12 00:44:39.399996 kubelet[3156]: I0312 00:44:39.399945 3156 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 12 00:44:39.399996 kubelet[3156]: I0312 00:44:39.399968 3156 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 00:44:39.399996 kubelet[3156]: I0312 00:44:39.399975 3156 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 12 00:44:39.400119 kubelet[3156]: E0312 00:44:39.400024 3156 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 00:44:39.403960 kubelet[3156]: I0312 00:44:39.400940 3156 factory.go:223] Registration of the systemd container factory successfully
Mar 12 00:44:39.403960 kubelet[3156]: I0312 00:44:39.401026 3156 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 00:44:39.414097 kubelet[3156]: E0312 00:44:39.414073 3156 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 00:44:39.414646 kubelet[3156]: I0312 00:44:39.414606 3156 factory.go:223] Registration of the containerd container factory successfully
Mar 12 00:44:39.453680 kubelet[3156]: I0312 00:44:39.453650 3156 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 12 00:44:39.453680 kubelet[3156]: I0312 00:44:39.453668 3156 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 12 00:44:39.453680 kubelet[3156]: I0312 00:44:39.453688 3156 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 00:44:39.453843 kubelet[3156]: I0312 00:44:39.453807 3156 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 12 00:44:39.453843 kubelet[3156]: I0312 00:44:39.453816 3156 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 12 00:44:39.453843 kubelet[3156]: I0312 00:44:39.453835 3156 policy_none.go:49] "None policy: Start"
Mar 12 00:44:39.453843 kubelet[3156]: I0312 00:44:39.453843 3156 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 12 00:44:39.453917 kubelet[3156]: I0312 00:44:39.453851 3156 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 00:44:39.453938 kubelet[3156]: I0312 00:44:39.453931 3156 state_mem.go:75] "Updated machine memory state"
Mar 12 00:44:39.458519 kubelet[3156]: E0312 00:44:39.457135 3156 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 00:44:39.458519 kubelet[3156]: I0312 00:44:39.458014 3156 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 00:44:39.458519 kubelet[3156]: I0312 00:44:39.458027 3156 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 00:44:39.458519 kubelet[3156]: I0312 00:44:39.458349 3156 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 00:44:39.459831 kubelet[3156]: E0312 00:44:39.459240 3156 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 12 00:44:39.501555 kubelet[3156]: I0312 00:44:39.501207 3156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.501555 kubelet[3156]: I0312 00:44:39.501251 3156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.502707 kubelet[3156]: I0312 00:44:39.502683 3156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.513794 kubelet[3156]: I0312 00:44:39.513754 3156 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 12 00:44:39.518980 kubelet[3156]: I0312 00:44:39.518694 3156 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 12 00:44:39.518980 kubelet[3156]: I0312 00:44:39.518755 3156 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 12 00:44:39.560839 kubelet[3156]: I0312 00:44:39.560810 3156 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.582317 kubelet[3156]: I0312 00:44:39.582277 3156 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.582472 kubelet[3156]: I0312 00:44:39.582365 3156 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.595955 kubelet[3156]: I0312 00:44:39.595732 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f599e0b80efa697b64292fc49d539189-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" (UID: \"f599e0b80efa697b64292fc49d539189\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.595955 kubelet[3156]: I0312 00:44:39.595764 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.595955 kubelet[3156]: I0312 00:44:39.595783 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.595955 kubelet[3156]: I0312 00:44:39.595799 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f599e0b80efa697b64292fc49d539189-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" (UID: \"f599e0b80efa697b64292fc49d539189\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.595955 kubelet[3156]: I0312 00:44:39.595815 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f599e0b80efa697b64292fc49d539189-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" (UID: \"f599e0b80efa697b64292fc49d539189\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.596209 kubelet[3156]: I0312 00:44:39.595829 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.596209 kubelet[3156]: I0312 00:44:39.595845 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.596209 kubelet[3156]: I0312 00:44:39.595860 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5ca196ba85fedbc953124feea33a907-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" (UID: \"e5ca196ba85fedbc953124feea33a907\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:39.596209 kubelet[3156]: I0312 00:44:39.595875 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2632cc1885bada80a77634d9a874c86d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-d10d02cd33\" (UID: \"2632cc1885bada80a77634d9a874c86d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:40.361336 kubelet[3156]: I0312 00:44:40.361291 3156 apiserver.go:52] "Watching apiserver"
Mar 12 00:44:40.395502 kubelet[3156]: I0312 00:44:40.395457 3156 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 12 00:44:40.430742 kubelet[3156]: I0312 00:44:40.429883 3156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:40.431756 kubelet[3156]: I0312 00:44:40.431736 3156 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:40.446668 kubelet[3156]: I0312 00:44:40.446637 3156 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 12 00:44:40.446798 kubelet[3156]: E0312 00:44:40.446702 3156 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d10d02cd33\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:40.450506 kubelet[3156]: I0312 00:44:40.450479 3156 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 12 00:44:40.450601 kubelet[3156]: E0312 00:44:40.450529 3156 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d10d02cd33\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33"
Mar 12 00:44:40.472890 kubelet[3156]: I0312 00:44:40.472834 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d10d02cd33" podStartSLOduration=1.472818124 podStartE2EDuration="1.472818124s" podCreationTimestamp="2026-03-12 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 00:44:40.472449763 +0000 UTC m=+1.161537912" watchObservedRunningTime="2026-03-12 00:44:40.472818124 +0000 UTC m=+1.161906273"
Mar 12 00:44:40.473017 kubelet[3156]: I0312 00:44:40.472926 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d10d02cd33" podStartSLOduration=1.472922044 podStartE2EDuration="1.472922044s" podCreationTimestamp="2026-03-12 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 00:44:40.457436312 +0000 UTC m=+1.146524501" watchObservedRunningTime="2026-03-12 00:44:40.472922044 +0000 UTC m=+1.162010233"
Mar 12 00:44:40.504370 kubelet[3156]: I0312 00:44:40.504324 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d10d02cd33" podStartSLOduration=1.504302587 podStartE2EDuration="1.504302587s" podCreationTimestamp="2026-03-12 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 00:44:40.485452013 +0000 UTC m=+1.174540202" watchObservedRunningTime="2026-03-12 00:44:40.504302587 +0000 UTC m=+1.193390776"
Mar 12 00:44:44.531598 kubelet[3156]: I0312 00:44:44.531547 3156 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 12 00:44:44.532957 containerd[1719]: time="2026-03-12T00:44:44.532149367Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 12 00:44:44.533478 kubelet[3156]: I0312 00:44:44.532723 3156 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 12 00:44:45.609917 systemd[1]: Created slice kubepods-besteffort-pod67afd5a2_cdf4_4883_a8d3_d2619053fe04.slice - libcontainer container kubepods-besteffort-pod67afd5a2_cdf4_4883_a8d3_d2619053fe04.slice.
Mar 12 00:44:45.629516 kubelet[3156]: I0312 00:44:45.629478 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67afd5a2-cdf4-4883-a8d3-d2619053fe04-xtables-lock\") pod \"kube-proxy-69zgt\" (UID: \"67afd5a2-cdf4-4883-a8d3-d2619053fe04\") " pod="kube-system/kube-proxy-69zgt"
Mar 12 00:44:45.630038 kubelet[3156]: I0312 00:44:45.629896 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx5jv\" (UniqueName: \"kubernetes.io/projected/67afd5a2-cdf4-4883-a8d3-d2619053fe04-kube-api-access-sx5jv\") pod \"kube-proxy-69zgt\" (UID: \"67afd5a2-cdf4-4883-a8d3-d2619053fe04\") " pod="kube-system/kube-proxy-69zgt"
Mar 12 00:44:45.630038 kubelet[3156]: I0312 00:44:45.629963 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67afd5a2-cdf4-4883-a8d3-d2619053fe04-kube-proxy\") pod \"kube-proxy-69zgt\" (UID: \"67afd5a2-cdf4-4883-a8d3-d2619053fe04\") " pod="kube-system/kube-proxy-69zgt"
Mar 12 00:44:45.630038 kubelet[3156]: I0312 00:44:45.629984 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67afd5a2-cdf4-4883-a8d3-d2619053fe04-lib-modules\") pod \"kube-proxy-69zgt\" (UID: \"67afd5a2-cdf4-4883-a8d3-d2619053fe04\") " pod="kube-system/kube-proxy-69zgt"
Mar 12 00:44:45.789769 systemd[1]: Created slice kubepods-besteffort-pod5e5dc9f2_6984_47bf_8fc6_20a61c642274.slice - libcontainer container kubepods-besteffort-pod5e5dc9f2_6984_47bf_8fc6_20a61c642274.slice.
Mar 12 00:44:45.832244 kubelet[3156]: I0312 00:44:45.832143 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6fmf\" (UniqueName: \"kubernetes.io/projected/5e5dc9f2-6984-47bf-8fc6-20a61c642274-kube-api-access-j6fmf\") pod \"tigera-operator-6bf85f8dd-2kwcg\" (UID: \"5e5dc9f2-6984-47bf-8fc6-20a61c642274\") " pod="tigera-operator/tigera-operator-6bf85f8dd-2kwcg"
Mar 12 00:44:45.832244 kubelet[3156]: I0312 00:44:45.832184 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e5dc9f2-6984-47bf-8fc6-20a61c642274-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-2kwcg\" (UID: \"5e5dc9f2-6984-47bf-8fc6-20a61c642274\") " pod="tigera-operator/tigera-operator-6bf85f8dd-2kwcg"
Mar 12 00:44:45.918539 containerd[1719]: time="2026-03-12T00:44:45.918430224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-69zgt,Uid:67afd5a2-cdf4-4883-a8d3-d2619053fe04,Namespace:kube-system,Attempt:0,}"
Mar 12 00:44:45.961064 containerd[1719]: time="2026-03-12T00:44:45.960509976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 00:44:45.961064 containerd[1719]: time="2026-03-12T00:44:45.960562696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 00:44:45.961064 containerd[1719]: time="2026-03-12T00:44:45.960577696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 00:44:45.961064 containerd[1719]: time="2026-03-12T00:44:45.960645936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 00:44:45.984671 systemd[1]: Started cri-containerd-9368276137c88b7ea282f8069a2d732e9fbcb51778998910ac5a3e483ed4dccc.scope - libcontainer container 9368276137c88b7ea282f8069a2d732e9fbcb51778998910ac5a3e483ed4dccc.
Mar 12 00:44:46.003263 containerd[1719]: time="2026-03-12T00:44:46.003069169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-69zgt,Uid:67afd5a2-cdf4-4883-a8d3-d2619053fe04,Namespace:kube-system,Attempt:0,} returns sandbox id \"9368276137c88b7ea282f8069a2d732e9fbcb51778998910ac5a3e483ed4dccc\""
Mar 12 00:44:46.010838 containerd[1719]: time="2026-03-12T00:44:46.010728775Z" level=info msg="CreateContainer within sandbox \"9368276137c88b7ea282f8069a2d732e9fbcb51778998910ac5a3e483ed4dccc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 12 00:44:46.041487 containerd[1719]: time="2026-03-12T00:44:46.041432278Z" level=info msg="CreateContainer within sandbox \"9368276137c88b7ea282f8069a2d732e9fbcb51778998910ac5a3e483ed4dccc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9bf66334bd184008f2a15a61ef29891e954eeb92c3e06c29e6b2dcd32952384b\""
Mar 12 00:44:46.042398 containerd[1719]: time="2026-03-12T00:44:46.042135478Z" level=info msg="StartContainer for \"9bf66334bd184008f2a15a61ef29891e954eeb92c3e06c29e6b2dcd32952384b\""
Mar 12 00:44:46.062558 systemd[1]: Started cri-containerd-9bf66334bd184008f2a15a61ef29891e954eeb92c3e06c29e6b2dcd32952384b.scope - libcontainer container 9bf66334bd184008f2a15a61ef29891e954eeb92c3e06c29e6b2dcd32952384b.
Mar 12 00:44:46.090415 containerd[1719]: time="2026-03-12T00:44:46.090363355Z" level=info msg="StartContainer for \"9bf66334bd184008f2a15a61ef29891e954eeb92c3e06c29e6b2dcd32952384b\" returns successfully"
Mar 12 00:44:46.094262 containerd[1719]: time="2026-03-12T00:44:46.094200638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-2kwcg,Uid:5e5dc9f2-6984-47bf-8fc6-20a61c642274,Namespace:tigera-operator,Attempt:0,}"
Mar 12 00:44:46.128647 containerd[1719]: time="2026-03-12T00:44:46.128558384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 00:44:46.128815 containerd[1719]: time="2026-03-12T00:44:46.128615544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 00:44:46.128815 containerd[1719]: time="2026-03-12T00:44:46.128632464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 00:44:46.128815 containerd[1719]: time="2026-03-12T00:44:46.128706865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 00:44:46.147530 systemd[1]: Started cri-containerd-d7abc3ae055b4331895955de54227013b77bc0e53435d165167caf90ab7b5492.scope - libcontainer container d7abc3ae055b4331895955de54227013b77bc0e53435d165167caf90ab7b5492.
Mar 12 00:44:46.186125 containerd[1719]: time="2026-03-12T00:44:46.185151748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-2kwcg,Uid:5e5dc9f2-6984-47bf-8fc6-20a61c642274,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d7abc3ae055b4331895955de54227013b77bc0e53435d165167caf90ab7b5492\""
Mar 12 00:44:46.188039 containerd[1719]: time="2026-03-12T00:44:46.188004550Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 12 00:44:48.089604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298577462.mount: Deactivated successfully.
Mar 12 00:44:50.093768 kubelet[3156]: I0312 00:44:50.093178 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-69zgt" podStartSLOduration=5.093162248 podStartE2EDuration="5.093162248s" podCreationTimestamp="2026-03-12 00:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 00:44:46.453875472 +0000 UTC m=+7.142963661" watchObservedRunningTime="2026-03-12 00:44:50.093162248 +0000 UTC m=+10.782250437"
Mar 12 00:44:51.636128 containerd[1719]: time="2026-03-12T00:44:51.636078024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:51.639041 containerd[1719]: time="2026-03-12T00:44:51.638823626Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Mar 12 00:44:51.641559 containerd[1719]: time="2026-03-12T00:44:51.641534428Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:51.646483 containerd[1719]: time="2026-03-12T00:44:51.646198112Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 00:44:51.646982 containerd[1719]: time="2026-03-12T00:44:51.646955152Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 5.458913722s"
Mar 12 00:44:51.647030 containerd[1719]: time="2026-03-12T00:44:51.646983552Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Mar 12 00:44:51.653709 containerd[1719]: time="2026-03-12T00:44:51.653672278Z" level=info msg="CreateContainer within sandbox \"d7abc3ae055b4331895955de54227013b77bc0e53435d165167caf90ab7b5492\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 12 00:44:51.678182 containerd[1719]: time="2026-03-12T00:44:51.678070216Z" level=info msg="CreateContainer within sandbox \"d7abc3ae055b4331895955de54227013b77bc0e53435d165167caf90ab7b5492\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6a29791c4032979d9cc62ccff6c580d9208031590d51f9b448fa701b9fef6200\""
Mar 12 00:44:51.680245 containerd[1719]: time="2026-03-12T00:44:51.680116618Z" level=info msg="StartContainer for \"6a29791c4032979d9cc62ccff6c580d9208031590d51f9b448fa701b9fef6200\""
Mar 12 00:44:51.713533 systemd[1]: Started cri-containerd-6a29791c4032979d9cc62ccff6c580d9208031590d51f9b448fa701b9fef6200.scope - libcontainer container 6a29791c4032979d9cc62ccff6c580d9208031590d51f9b448fa701b9fef6200.
Mar 12 00:44:51.739791 containerd[1719]: time="2026-03-12T00:44:51.739723903Z" level=info msg="StartContainer for \"6a29791c4032979d9cc62ccff6c580d9208031590d51f9b448fa701b9fef6200\" returns successfully"
Mar 12 00:44:52.466544 kubelet[3156]: I0312 00:44:52.466327 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-2kwcg" podStartSLOduration=2.005779573 podStartE2EDuration="7.466312617s" podCreationTimestamp="2026-03-12 00:44:45 +0000 UTC" firstStartedPulling="2026-03-12 00:44:46.187276949 +0000 UTC m=+6.876365098" lastFinishedPulling="2026-03-12 00:44:51.647809953 +0000 UTC m=+12.336898142" observedRunningTime="2026-03-12 00:44:52.466096177 +0000 UTC m=+13.155184366" watchObservedRunningTime="2026-03-12 00:44:52.466312617 +0000 UTC m=+13.155400806"
Mar 12 00:44:57.418447 sudo[2224]: pam_unix(sudo:session): session closed for user root
Mar 12 00:44:57.491131 sshd[2221]: pam_unix(sshd:session): session closed for user core
Mar 12 00:44:57.494296 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:54652.service: Deactivated successfully.
Mar 12 00:44:57.498342 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 00:44:57.498583 systemd[1]: session-9.scope: Consumed 5.953s CPU time, 152.7M memory peak, 0B memory swap peak.
Mar 12 00:44:57.500400 systemd-logind[1701]: Session 9 logged out. Waiting for processes to exit.
Mar 12 00:44:57.504050 systemd-logind[1701]: Removed session 9.
Mar 12 00:45:01.916145 systemd[1]: Created slice kubepods-besteffort-pod7adeeb2a_f441_408d_8f12_4f8e6b20b4ff.slice - libcontainer container kubepods-besteffort-pod7adeeb2a_f441_408d_8f12_4f8e6b20b4ff.slice.
Mar 12 00:45:02.023307 kubelet[3156]: I0312 00:45:02.023265 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adeeb2a-f441-408d-8f12-4f8e6b20b4ff-tigera-ca-bundle\") pod \"calico-typha-fd57db68f-85klq\" (UID: \"7adeeb2a-f441-408d-8f12-4f8e6b20b4ff\") " pod="calico-system/calico-typha-fd57db68f-85klq" Mar 12 00:45:02.023307 kubelet[3156]: I0312 00:45:02.023303 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7adeeb2a-f441-408d-8f12-4f8e6b20b4ff-typha-certs\") pod \"calico-typha-fd57db68f-85klq\" (UID: \"7adeeb2a-f441-408d-8f12-4f8e6b20b4ff\") " pod="calico-system/calico-typha-fd57db68f-85klq" Mar 12 00:45:02.023700 kubelet[3156]: I0312 00:45:02.023326 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzj4\" (UniqueName: \"kubernetes.io/projected/7adeeb2a-f441-408d-8f12-4f8e6b20b4ff-kube-api-access-vfzj4\") pod \"calico-typha-fd57db68f-85klq\" (UID: \"7adeeb2a-f441-408d-8f12-4f8e6b20b4ff\") " pod="calico-system/calico-typha-fd57db68f-85klq" Mar 12 00:45:02.038900 systemd[1]: Created slice kubepods-besteffort-pod484282d3_3e51_4938_aa66_86e7a73fb048.slice - libcontainer container kubepods-besteffort-pod484282d3_3e51_4938_aa66_86e7a73fb048.slice. 
Mar 12 00:45:02.124200 kubelet[3156]: I0312 00:45:02.124152 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-var-run-calico\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124200 kubelet[3156]: I0312 00:45:02.124197 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-cni-log-dir\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124364 kubelet[3156]: I0312 00:45:02.124274 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-cni-net-dir\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124364 kubelet[3156]: I0312 00:45:02.124293 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-flexvol-driver-host\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124364 kubelet[3156]: I0312 00:45:02.124311 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-xtables-lock\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124364 kubelet[3156]: I0312 00:45:02.124338 3156 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-policysync\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124364 kubelet[3156]: I0312 00:45:02.124354 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/484282d3-3e51-4938-aa66-86e7a73fb048-node-certs\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124520 kubelet[3156]: I0312 00:45:02.124368 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-var-lib-calico\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124520 kubelet[3156]: I0312 00:45:02.124394 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb826\" (UniqueName: \"kubernetes.io/projected/484282d3-3e51-4938-aa66-86e7a73fb048-kube-api-access-fb826\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124520 kubelet[3156]: I0312 00:45:02.124423 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/484282d3-3e51-4938-aa66-86e7a73fb048-tigera-ca-bundle\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124520 kubelet[3156]: I0312 00:45:02.124447 3156 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-bpffs\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124520 kubelet[3156]: I0312 00:45:02.124460 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-nodeproc\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124627 kubelet[3156]: I0312 00:45:02.124477 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-lib-modules\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124627 kubelet[3156]: I0312 00:45:02.124496 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-cni-bin-dir\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.124627 kubelet[3156]: I0312 00:45:02.124511 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/484282d3-3e51-4938-aa66-86e7a73fb048-sys-fs\") pod \"calico-node-tpfg4\" (UID: \"484282d3-3e51-4938-aa66-86e7a73fb048\") " pod="calico-system/calico-node-tpfg4" Mar 12 00:45:02.151696 kubelet[3156]: E0312 00:45:02.151417 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:02.221588 containerd[1719]: time="2026-03-12T00:45:02.221547732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd57db68f-85klq,Uid:7adeeb2a-f441-408d-8f12-4f8e6b20b4ff,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:02.226416 kubelet[3156]: I0312 00:45:02.224774 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe7afae4-b713-43a4-990f-e7b7f89a4386-registration-dir\") pod \"csi-node-driver-b7676\" (UID: \"fe7afae4-b713-43a4-990f-e7b7f89a4386\") " pod="calico-system/csi-node-driver-b7676" Mar 12 00:45:02.226416 kubelet[3156]: I0312 00:45:02.224809 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe7afae4-b713-43a4-990f-e7b7f89a4386-varrun\") pod \"csi-node-driver-b7676\" (UID: \"fe7afae4-b713-43a4-990f-e7b7f89a4386\") " pod="calico-system/csi-node-driver-b7676" Mar 12 00:45:02.226416 kubelet[3156]: I0312 00:45:02.224926 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plw5s\" (UniqueName: \"kubernetes.io/projected/fe7afae4-b713-43a4-990f-e7b7f89a4386-kube-api-access-plw5s\") pod \"csi-node-driver-b7676\" (UID: \"fe7afae4-b713-43a4-990f-e7b7f89a4386\") " pod="calico-system/csi-node-driver-b7676" Mar 12 00:45:02.226416 kubelet[3156]: I0312 00:45:02.224953 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe7afae4-b713-43a4-990f-e7b7f89a4386-kubelet-dir\") pod \"csi-node-driver-b7676\" (UID: \"fe7afae4-b713-43a4-990f-e7b7f89a4386\") " 
pod="calico-system/csi-node-driver-b7676" Mar 12 00:45:02.226416 kubelet[3156]: I0312 00:45:02.224969 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe7afae4-b713-43a4-990f-e7b7f89a4386-socket-dir\") pod \"csi-node-driver-b7676\" (UID: \"fe7afae4-b713-43a4-990f-e7b7f89a4386\") " pod="calico-system/csi-node-driver-b7676" Mar 12 00:45:02.227565 kubelet[3156]: E0312 00:45:02.227298 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.227565 kubelet[3156]: W0312 00:45:02.227317 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.227565 kubelet[3156]: E0312 00:45:02.227335 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.227825 kubelet[3156]: E0312 00:45:02.227771 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.227825 kubelet[3156]: W0312 00:45:02.227781 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.227952 kubelet[3156]: E0312 00:45:02.227794 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:02.261514 kubelet[3156]: E0312 00:45:02.261475 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.261514 kubelet[3156]: W0312 00:45:02.261506 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.261669 kubelet[3156]: E0312 00:45:02.261525 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.278661 containerd[1719]: time="2026-03-12T00:45:02.278571898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:02.278661 containerd[1719]: time="2026-03-12T00:45:02.278618898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:02.278661 containerd[1719]: time="2026-03-12T00:45:02.278629538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:02.279065 containerd[1719]: time="2026-03-12T00:45:02.278694658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:02.303721 systemd[1]: Started cri-containerd-baadcdec7b0044ee9f9a08b129a23ae37b060d2f6d0943316fcc725285491d73.scope - libcontainer container baadcdec7b0044ee9f9a08b129a23ae37b060d2f6d0943316fcc725285491d73. 
Mar 12 00:45:02.325993 kubelet[3156]: E0312 00:45:02.325951 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.326169 kubelet[3156]: W0312 00:45:02.326153 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.326412 kubelet[3156]: E0312 00:45:02.326244 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.326657 kubelet[3156]: E0312 00:45:02.326642 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.327217 kubelet[3156]: W0312 00:45:02.327137 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.327217 kubelet[3156]: E0312 00:45:02.327168 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.327573 kubelet[3156]: E0312 00:45:02.327554 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.327573 kubelet[3156]: W0312 00:45:02.327569 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.327675 kubelet[3156]: E0312 00:45:02.327587 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:02.335190 kubelet[3156]: E0312 00:45:02.335052 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.335190 kubelet[3156]: W0312 00:45:02.335070 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.335190 kubelet[3156]: E0312 00:45:02.335081 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.336911 kubelet[3156]: E0312 00:45:02.335502 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.336911 kubelet[3156]: W0312 00:45:02.335516 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.336911 kubelet[3156]: E0312 00:45:02.335527 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:02.336911 kubelet[3156]: E0312 00:45:02.336737 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.336911 kubelet[3156]: W0312 00:45:02.336748 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.336911 kubelet[3156]: E0312 00:45:02.336759 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.336911 kubelet[3156]: E0312 00:45:02.336918 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.337111 kubelet[3156]: W0312 00:45:02.336928 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.337111 kubelet[3156]: E0312 00:45:02.336936 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:02.337111 kubelet[3156]: E0312 00:45:02.337050 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.337111 kubelet[3156]: W0312 00:45:02.337056 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.337111 kubelet[3156]: E0312 00:45:02.337064 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.337218 kubelet[3156]: E0312 00:45:02.337202 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.337218 kubelet[3156]: W0312 00:45:02.337209 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.337270 kubelet[3156]: E0312 00:45:02.337217 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:02.343826 containerd[1719]: time="2026-03-12T00:45:02.343487190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpfg4,Uid:484282d3-3e51-4938-aa66-86e7a73fb048,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:02.346989 kubelet[3156]: E0312 00:45:02.346969 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:02.347109 kubelet[3156]: W0312 00:45:02.347094 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:02.347192 kubelet[3156]: E0312 00:45:02.347180 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:02.348442 containerd[1719]: time="2026-03-12T00:45:02.348315954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd57db68f-85klq,Uid:7adeeb2a-f441-408d-8f12-4f8e6b20b4ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"baadcdec7b0044ee9f9a08b129a23ae37b060d2f6d0943316fcc725285491d73\"" Mar 12 00:45:02.353049 containerd[1719]: time="2026-03-12T00:45:02.352763958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 00:45:02.396566 containerd[1719]: time="2026-03-12T00:45:02.396459193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:02.397146 containerd[1719]: time="2026-03-12T00:45:02.396787113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:02.397146 containerd[1719]: time="2026-03-12T00:45:02.396808433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:02.397146 containerd[1719]: time="2026-03-12T00:45:02.397085233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:02.418559 systemd[1]: Started cri-containerd-9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0.scope - libcontainer container 9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0. Mar 12 00:45:02.520337 containerd[1719]: time="2026-03-12T00:45:02.518409171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpfg4,Uid:484282d3-3e51-4938-aa66-86e7a73fb048,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\"" Mar 12 00:45:03.402908 kubelet[3156]: E0312 00:45:03.402862 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:03.717819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621344929.mount: Deactivated successfully. 
Mar 12 00:45:04.875350 containerd[1719]: time="2026-03-12T00:45:04.874590471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:04.878016 containerd[1719]: time="2026-03-12T00:45:04.877984434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 12 00:45:04.880722 containerd[1719]: time="2026-03-12T00:45:04.880678076Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:04.885696 containerd[1719]: time="2026-03-12T00:45:04.885659640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:04.886409 containerd[1719]: time="2026-03-12T00:45:04.886284320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.533487442s" Mar 12 00:45:04.886409 containerd[1719]: time="2026-03-12T00:45:04.886314840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 12 00:45:04.887919 containerd[1719]: time="2026-03-12T00:45:04.887808202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 00:45:04.904816 containerd[1719]: time="2026-03-12T00:45:04.904769935Z" level=info msg="CreateContainer within sandbox \"baadcdec7b0044ee9f9a08b129a23ae37b060d2f6d0943316fcc725285491d73\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 00:45:04.938242 containerd[1719]: time="2026-03-12T00:45:04.938190562Z" level=info msg="CreateContainer within sandbox \"baadcdec7b0044ee9f9a08b129a23ae37b060d2f6d0943316fcc725285491d73\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d36141636d4d1e68ebc386aaa4a8735c41f818a7a7075c35f088073c3bb6800\"" Mar 12 00:45:04.939916 containerd[1719]: time="2026-03-12T00:45:04.938754843Z" level=info msg="StartContainer for \"7d36141636d4d1e68ebc386aaa4a8735c41f818a7a7075c35f088073c3bb6800\"" Mar 12 00:45:04.965531 systemd[1]: Started cri-containerd-7d36141636d4d1e68ebc386aaa4a8735c41f818a7a7075c35f088073c3bb6800.scope - libcontainer container 7d36141636d4d1e68ebc386aaa4a8735c41f818a7a7075c35f088073c3bb6800. Mar 12 00:45:05.000125 containerd[1719]: time="2026-03-12T00:45:04.999455692Z" level=info msg="StartContainer for \"7d36141636d4d1e68ebc386aaa4a8735c41f818a7a7075c35f088073c3bb6800\" returns successfully" Mar 12 00:45:05.401667 kubelet[3156]: E0312 00:45:05.401609 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:05.526673 kubelet[3156]: E0312 00:45:05.526642 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.526673 kubelet[3156]: W0312 00:45:05.526665 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.526854 kubelet[3156]: E0312 00:45:05.526687 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.526902 kubelet[3156]: E0312 00:45:05.526868 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.526926 kubelet[3156]: W0312 00:45:05.526876 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.526949 kubelet[3156]: E0312 00:45:05.526925 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.527092 kubelet[3156]: E0312 00:45:05.527080 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.527092 kubelet[3156]: W0312 00:45:05.527091 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.527150 kubelet[3156]: E0312 00:45:05.527100 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.527261 kubelet[3156]: E0312 00:45:05.527249 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.527261 kubelet[3156]: W0312 00:45:05.527260 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.527313 kubelet[3156]: E0312 00:45:05.527268 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.527448 kubelet[3156]: E0312 00:45:05.527437 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.527448 kubelet[3156]: W0312 00:45:05.527447 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.527508 kubelet[3156]: E0312 00:45:05.527462 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.527610 kubelet[3156]: E0312 00:45:05.527600 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.527641 kubelet[3156]: W0312 00:45:05.527615 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.527641 kubelet[3156]: E0312 00:45:05.527624 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.527774 kubelet[3156]: E0312 00:45:05.527764 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.527774 kubelet[3156]: W0312 00:45:05.527773 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.527829 kubelet[3156]: E0312 00:45:05.527782 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.527929 kubelet[3156]: E0312 00:45:05.527918 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.527929 kubelet[3156]: W0312 00:45:05.527928 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.527984 kubelet[3156]: E0312 00:45:05.527938 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.528104 kubelet[3156]: E0312 00:45:05.528094 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.528104 kubelet[3156]: W0312 00:45:05.528104 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.528159 kubelet[3156]: E0312 00:45:05.528112 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.528254 kubelet[3156]: E0312 00:45:05.528243 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.528254 kubelet[3156]: W0312 00:45:05.528253 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.528309 kubelet[3156]: E0312 00:45:05.528261 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.528415 kubelet[3156]: E0312 00:45:05.528404 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.528415 kubelet[3156]: W0312 00:45:05.528415 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.528475 kubelet[3156]: E0312 00:45:05.528423 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.528582 kubelet[3156]: E0312 00:45:05.528568 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.528582 kubelet[3156]: W0312 00:45:05.528579 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.528639 kubelet[3156]: E0312 00:45:05.528587 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.528745 kubelet[3156]: E0312 00:45:05.528734 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.528745 kubelet[3156]: W0312 00:45:05.528744 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.528797 kubelet[3156]: E0312 00:45:05.528752 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.528914 kubelet[3156]: E0312 00:45:05.528903 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.528914 kubelet[3156]: W0312 00:45:05.528913 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.528971 kubelet[3156]: E0312 00:45:05.528921 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.529080 kubelet[3156]: E0312 00:45:05.529069 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.529080 kubelet[3156]: W0312 00:45:05.529079 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.529125 kubelet[3156]: E0312 00:45:05.529086 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.555839 kubelet[3156]: E0312 00:45:05.555693 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.555839 kubelet[3156]: W0312 00:45:05.555713 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.555839 kubelet[3156]: E0312 00:45:05.555729 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.556241 kubelet[3156]: E0312 00:45:05.556116 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.556241 kubelet[3156]: W0312 00:45:05.556129 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.556241 kubelet[3156]: E0312 00:45:05.556141 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.556492 kubelet[3156]: E0312 00:45:05.556479 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.556691 kubelet[3156]: W0312 00:45:05.556557 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.556691 kubelet[3156]: E0312 00:45:05.556573 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.556883 kubelet[3156]: E0312 00:45:05.556872 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.557043 kubelet[3156]: W0312 00:45:05.556933 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.557043 kubelet[3156]: E0312 00:45:05.556948 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.557322 kubelet[3156]: E0312 00:45:05.557227 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.557322 kubelet[3156]: W0312 00:45:05.557239 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.557322 kubelet[3156]: E0312 00:45:05.557250 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.557691 kubelet[3156]: E0312 00:45:05.557564 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.557691 kubelet[3156]: W0312 00:45:05.557578 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.557691 kubelet[3156]: E0312 00:45:05.557589 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.557930 kubelet[3156]: E0312 00:45:05.557918 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.558076 kubelet[3156]: W0312 00:45:05.557962 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.558076 kubelet[3156]: E0312 00:45:05.557976 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.558594 kubelet[3156]: E0312 00:45:05.558572 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.558594 kubelet[3156]: W0312 00:45:05.558591 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.559951 kubelet[3156]: E0312 00:45:05.558605 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.559951 kubelet[3156]: E0312 00:45:05.558951 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.559951 kubelet[3156]: W0312 00:45:05.558962 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.559951 kubelet[3156]: E0312 00:45:05.558971 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.559951 kubelet[3156]: E0312 00:45:05.559329 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.559951 kubelet[3156]: W0312 00:45:05.559341 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.559951 kubelet[3156]: E0312 00:45:05.559352 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.559951 kubelet[3156]: E0312 00:45:05.559718 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.559951 kubelet[3156]: W0312 00:45:05.559733 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.559951 kubelet[3156]: E0312 00:45:05.559743 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.560239 kubelet[3156]: E0312 00:45:05.560220 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.560336 kubelet[3156]: W0312 00:45:05.560237 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.560369 kubelet[3156]: E0312 00:45:05.560340 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.560678 kubelet[3156]: E0312 00:45:05.560661 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.560678 kubelet[3156]: W0312 00:45:05.560675 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.560745 kubelet[3156]: E0312 00:45:05.560686 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.561297 kubelet[3156]: E0312 00:45:05.561113 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.561297 kubelet[3156]: W0312 00:45:05.561126 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.561297 kubelet[3156]: E0312 00:45:05.561137 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.561676 kubelet[3156]: E0312 00:45:05.561653 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.561676 kubelet[3156]: W0312 00:45:05.561671 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.561743 kubelet[3156]: E0312 00:45:05.561683 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.562133 kubelet[3156]: E0312 00:45:05.562021 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.562133 kubelet[3156]: W0312 00:45:05.562132 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.562201 kubelet[3156]: E0312 00:45:05.562144 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:05.562555 kubelet[3156]: E0312 00:45:05.562503 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.562555 kubelet[3156]: W0312 00:45:05.562516 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.562555 kubelet[3156]: E0312 00:45:05.562529 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 00:45:05.563017 kubelet[3156]: E0312 00:45:05.562997 3156 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 00:45:05.563072 kubelet[3156]: W0312 00:45:05.563025 3156 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 00:45:05.563072 kubelet[3156]: E0312 00:45:05.563039 3156 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 00:45:06.310064 containerd[1719]: time="2026-03-12T00:45:06.310010228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:06.312368 containerd[1719]: time="2026-03-12T00:45:06.312246750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 12 00:45:06.314908 containerd[1719]: time="2026-03-12T00:45:06.314882032Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:06.319479 containerd[1719]: time="2026-03-12T00:45:06.319423636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:06.320293 containerd[1719]: time="2026-03-12T00:45:06.320175597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.432331115s" Mar 12 00:45:06.320293 containerd[1719]: time="2026-03-12T00:45:06.320209237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 12 00:45:06.328133 containerd[1719]: time="2026-03-12T00:45:06.328098563Z" level=info msg="CreateContainer within sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 00:45:06.359247 containerd[1719]: time="2026-03-12T00:45:06.359203188Z" level=info msg="CreateContainer within sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96\"" Mar 12 00:45:06.360564 containerd[1719]: time="2026-03-12T00:45:06.360534829Z" level=info msg="StartContainer for \"701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96\"" Mar 12 00:45:06.395522 systemd[1]: Started cri-containerd-701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96.scope - libcontainer container 701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96. Mar 12 00:45:06.423698 containerd[1719]: time="2026-03-12T00:45:06.423632640Z" level=info msg="StartContainer for \"701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96\" returns successfully" Mar 12 00:45:06.430655 systemd[1]: cri-containerd-701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96.scope: Deactivated successfully. Mar 12 00:45:06.448538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96-rootfs.mount: Deactivated successfully. 
Mar 12 00:45:06.483155 kubelet[3156]: I0312 00:45:06.483053 3156 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 00:45:06.508060 kubelet[3156]: I0312 00:45:06.507459 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fd57db68f-85klq" podStartSLOduration=2.972325584 podStartE2EDuration="5.507440788s" podCreationTimestamp="2026-03-12 00:45:01 +0000 UTC" firstStartedPulling="2026-03-12 00:45:02.352269757 +0000 UTC m=+23.041357906" lastFinishedPulling="2026-03-12 00:45:04.887384921 +0000 UTC m=+25.576473110" observedRunningTime="2026-03-12 00:45:05.49318633 +0000 UTC m=+26.182274519" watchObservedRunningTime="2026-03-12 00:45:06.507440788 +0000 UTC m=+27.196528937" Mar 12 00:45:07.401893 kubelet[3156]: E0312 00:45:07.401572 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:07.537697 containerd[1719]: time="2026-03-12T00:45:07.537596778Z" level=info msg="shim disconnected" id=701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96 namespace=k8s.io Mar 12 00:45:07.538233 containerd[1719]: time="2026-03-12T00:45:07.538079859Z" level=warning msg="cleaning up after shim disconnected" id=701f1cfc95dfd1f9dfa24f3955263f0e34e9fc0302dfba2c11637e6fb33c5b96 namespace=k8s.io Mar 12 00:45:07.538233 containerd[1719]: time="2026-03-12T00:45:07.538102419Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 00:45:07.548403 containerd[1719]: time="2026-03-12T00:45:07.547467226Z" level=warning msg="cleanup warnings time=\"2026-03-12T00:45:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" 
namespace=k8s.io Mar 12 00:45:08.490202 containerd[1719]: time="2026-03-12T00:45:08.490118226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 00:45:09.400970 kubelet[3156]: E0312 00:45:09.400630 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:11.402080 kubelet[3156]: E0312 00:45:11.402035 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:13.401923 kubelet[3156]: E0312 00:45:13.401584 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:15.401978 kubelet[3156]: E0312 00:45:15.401569 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:15.956924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871340630.mount: Deactivated successfully. 
Mar 12 00:45:16.010823 containerd[1719]: time="2026-03-12T00:45:16.010056599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:16.013172 containerd[1719]: time="2026-03-12T00:45:16.013139521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 12 00:45:16.016243 containerd[1719]: time="2026-03-12T00:45:16.016198523Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:16.019865 containerd[1719]: time="2026-03-12T00:45:16.019809486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:16.020617 containerd[1719]: time="2026-03-12T00:45:16.020481606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.53028086s" Mar 12 00:45:16.020617 containerd[1719]: time="2026-03-12T00:45:16.020520726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 12 00:45:16.027899 containerd[1719]: time="2026-03-12T00:45:16.027740172Z" level=info msg="CreateContainer within sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 00:45:16.061075 containerd[1719]: time="2026-03-12T00:45:16.061029197Z" level=info 
msg="CreateContainer within sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c\"" Mar 12 00:45:16.062084 containerd[1719]: time="2026-03-12T00:45:16.061735517Z" level=info msg="StartContainer for \"daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c\"" Mar 12 00:45:16.104550 systemd[1]: Started cri-containerd-daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c.scope - libcontainer container daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c. Mar 12 00:45:16.134692 containerd[1719]: time="2026-03-12T00:45:16.134550811Z" level=info msg="StartContainer for \"daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c\" returns successfully" Mar 12 00:45:16.172158 systemd[1]: cri-containerd-daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c.scope: Deactivated successfully. Mar 12 00:45:16.957031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c-rootfs.mount: Deactivated successfully. 
Mar 12 00:45:17.400572 kubelet[3156]: E0312 00:45:17.400464 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:17.647764 kubelet[3156]: I0312 00:45:17.647400 3156 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 00:45:17.845536 containerd[1719]: time="2026-03-12T00:45:17.845277373Z" level=info msg="shim disconnected" id=daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c namespace=k8s.io Mar 12 00:45:17.845536 containerd[1719]: time="2026-03-12T00:45:17.845330933Z" level=warning msg="cleaning up after shim disconnected" id=daf21ecc1c1e5c0881658d2c8b9dbc8cad41960e7664fe63e5ab59edafa1ae3c namespace=k8s.io Mar 12 00:45:17.845536 containerd[1719]: time="2026-03-12T00:45:17.845338813Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 00:45:18.515627 containerd[1719]: time="2026-03-12T00:45:18.514405955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 00:45:19.402811 kubelet[3156]: E0312 00:45:19.402313 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:21.402149 kubelet[3156]: E0312 00:45:21.402081 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 
00:45:21.898109 containerd[1719]: time="2026-03-12T00:45:21.898059656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:21.900741 containerd[1719]: time="2026-03-12T00:45:21.900700778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 12 00:45:21.905234 containerd[1719]: time="2026-03-12T00:45:21.904033780Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:21.909180 containerd[1719]: time="2026-03-12T00:45:21.908011783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:21.909180 containerd[1719]: time="2026-03-12T00:45:21.908813264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.393417148s" Mar 12 00:45:21.909180 containerd[1719]: time="2026-03-12T00:45:21.908839824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 12 00:45:21.914678 containerd[1719]: time="2026-03-12T00:45:21.914650388Z" level=info msg="CreateContainer within sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 00:45:21.951458 containerd[1719]: time="2026-03-12T00:45:21.951366656Z" level=info msg="CreateContainer within 
sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d\"" Mar 12 00:45:21.953499 containerd[1719]: time="2026-03-12T00:45:21.953473337Z" level=info msg="StartContainer for \"4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d\"" Mar 12 00:45:21.981558 systemd[1]: Started cri-containerd-4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d.scope - libcontainer container 4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d. Mar 12 00:45:22.007845 containerd[1719]: time="2026-03-12T00:45:22.007735058Z" level=info msg="StartContainer for \"4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d\" returns successfully" Mar 12 00:45:23.254444 systemd[1]: cri-containerd-4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d.scope: Deactivated successfully. Mar 12 00:45:23.283024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d-rootfs.mount: Deactivated successfully. Mar 12 00:45:23.348949 kubelet[3156]: I0312 00:45:23.348917 3156 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 12 00:45:23.751490 systemd[1]: Created slice kubepods-burstable-pod41eb878a_d7e0_41f9_bfd8_96489d627e74.slice - libcontainer container kubepods-burstable-pod41eb878a_d7e0_41f9_bfd8_96489d627e74.slice. 
Mar 12 00:45:23.775869 kubelet[3156]: I0312 00:45:23.775830 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41eb878a-d7e0-41f9-bfd8-96489d627e74-config-volume\") pod \"coredns-674b8bbfcf-6th7z\" (UID: \"41eb878a-d7e0-41f9-bfd8-96489d627e74\") " pod="kube-system/coredns-674b8bbfcf-6th7z" Mar 12 00:45:23.775869 kubelet[3156]: I0312 00:45:23.775872 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzw8v\" (UniqueName: \"kubernetes.io/projected/41eb878a-d7e0-41f9-bfd8-96489d627e74-kube-api-access-xzw8v\") pod \"coredns-674b8bbfcf-6th7z\" (UID: \"41eb878a-d7e0-41f9-bfd8-96489d627e74\") " pod="kube-system/coredns-674b8bbfcf-6th7z" Mar 12 00:45:24.138092 systemd[1]: Created slice kubepods-besteffort-podfe7afae4_b713_43a4_990f_e7b7f89a4386.slice - libcontainer container kubepods-besteffort-podfe7afae4_b713_43a4_990f_e7b7f89a4386.slice. 
Mar 12 00:45:24.179061 kubelet[3156]: I0312 00:45:24.178973 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx8xp\" (UniqueName: \"kubernetes.io/projected/bb4b0ced-686e-4023-85d0-51713ff7caee-kube-api-access-gx8xp\") pod \"coredns-674b8bbfcf-m9kwb\" (UID: \"bb4b0ced-686e-4023-85d0-51713ff7caee\") " pod="kube-system/coredns-674b8bbfcf-m9kwb" Mar 12 00:45:24.179061 kubelet[3156]: I0312 00:45:24.179017 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb4b0ced-686e-4023-85d0-51713ff7caee-config-volume\") pod \"coredns-674b8bbfcf-m9kwb\" (UID: \"bb4b0ced-686e-4023-85d0-51713ff7caee\") " pod="kube-system/coredns-674b8bbfcf-m9kwb" Mar 12 00:45:24.180269 containerd[1719]: time="2026-03-12T00:45:24.180001809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b7676,Uid:fe7afae4-b713-43a4-990f-e7b7f89a4386,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:24.184733 containerd[1719]: time="2026-03-12T00:45:24.184160172Z" level=info msg="shim disconnected" id=4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d namespace=k8s.io Mar 12 00:45:24.184733 containerd[1719]: time="2026-03-12T00:45:24.184202452Z" level=warning msg="cleaning up after shim disconnected" id=4edaa2d4a8c4aef6cc23fa536b8e733a42d19c883a0e5f56d827560473cc3a1d namespace=k8s.io Mar 12 00:45:24.184733 containerd[1719]: time="2026-03-12T00:45:24.184212172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 00:45:24.199441 systemd[1]: Created slice kubepods-burstable-podbb4b0ced_686e_4023_85d0_51713ff7caee.slice - libcontainer container kubepods-burstable-podbb4b0ced_686e_4023_85d0_51713ff7caee.slice. 
Mar 12 00:45:24.214743 systemd[1]: Created slice kubepods-besteffort-pod2b918b6d_d1dc_41b7_9960_5c26c50358cf.slice - libcontainer container kubepods-besteffort-pod2b918b6d_d1dc_41b7_9960_5c26c50358cf.slice. Mar 12 00:45:24.229751 systemd[1]: Created slice kubepods-besteffort-pod4baa34da_a95a_4849_9b92_ba607bae2503.slice - libcontainer container kubepods-besteffort-pod4baa34da_a95a_4849_9b92_ba607bae2503.slice. Mar 12 00:45:24.248254 systemd[1]: Created slice kubepods-besteffort-podf4641c63_b65a_46ce_b0bc_583db4549b9d.slice - libcontainer container kubepods-besteffort-podf4641c63_b65a_46ce_b0bc_583db4549b9d.slice. Mar 12 00:45:24.257216 systemd[1]: Created slice kubepods-besteffort-pod0562416e_6f3d_4639_bbce_d4ae1ac939e1.slice - libcontainer container kubepods-besteffort-pod0562416e_6f3d_4639_bbce_d4ae1ac939e1.slice. Mar 12 00:45:24.272506 systemd[1]: Created slice kubepods-besteffort-podda3b3329_2fdd_41d7_bb40_059c94905ea3.slice - libcontainer container kubepods-besteffort-podda3b3329_2fdd_41d7_bb40_059c94905ea3.slice. 
Mar 12 00:45:24.280523 kubelet[3156]: I0312 00:45:24.280047 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da3b3329-2fdd-41d7-bb40-059c94905ea3-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-rpsht\" (UID: \"da3b3329-2fdd-41d7-bb40-059c94905ea3\") " pod="calico-system/goldmane-5b85766d88-rpsht" Mar 12 00:45:24.280523 kubelet[3156]: I0312 00:45:24.280088 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/da3b3329-2fdd-41d7-bb40-059c94905ea3-goldmane-key-pair\") pod \"goldmane-5b85766d88-rpsht\" (UID: \"da3b3329-2fdd-41d7-bb40-059c94905ea3\") " pod="calico-system/goldmane-5b85766d88-rpsht" Mar 12 00:45:24.280523 kubelet[3156]: I0312 00:45:24.280123 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nv8h\" (UniqueName: \"kubernetes.io/projected/0562416e-6f3d-4639-bbce-d4ae1ac939e1-kube-api-access-8nv8h\") pod \"calico-apiserver-784f7866bd-ccwm9\" (UID: \"0562416e-6f3d-4639-bbce-d4ae1ac939e1\") " pod="calico-system/calico-apiserver-784f7866bd-ccwm9" Mar 12 00:45:24.280523 kubelet[3156]: I0312 00:45:24.280141 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzjdv\" (UniqueName: \"kubernetes.io/projected/da3b3329-2fdd-41d7-bb40-059c94905ea3-kube-api-access-mzjdv\") pod \"goldmane-5b85766d88-rpsht\" (UID: \"da3b3329-2fdd-41d7-bb40-059c94905ea3\") " pod="calico-system/goldmane-5b85766d88-rpsht" Mar 12 00:45:24.280523 kubelet[3156]: I0312 00:45:24.280157 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2q6z\" (UniqueName: \"kubernetes.io/projected/f4641c63-b65a-46ce-b0bc-583db4549b9d-kube-api-access-z2q6z\") pod \"calico-apiserver-784f7866bd-dcrfg\" 
(UID: \"f4641c63-b65a-46ce-b0bc-583db4549b9d\") " pod="calico-system/calico-apiserver-784f7866bd-dcrfg" Mar 12 00:45:24.280865 kubelet[3156]: I0312 00:45:24.280176 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da3b3329-2fdd-41d7-bb40-059c94905ea3-config\") pod \"goldmane-5b85766d88-rpsht\" (UID: \"da3b3329-2fdd-41d7-bb40-059c94905ea3\") " pod="calico-system/goldmane-5b85766d88-rpsht" Mar 12 00:45:24.280865 kubelet[3156]: I0312 00:45:24.280193 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-nginx-config\") pod \"whisker-f56947b46-d2zh8\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " pod="calico-system/whisker-f56947b46-d2zh8" Mar 12 00:45:24.280865 kubelet[3156]: I0312 00:45:24.280207 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4641c63-b65a-46ce-b0bc-583db4549b9d-calico-apiserver-certs\") pod \"calico-apiserver-784f7866bd-dcrfg\" (UID: \"f4641c63-b65a-46ce-b0bc-583db4549b9d\") " pod="calico-system/calico-apiserver-784f7866bd-dcrfg" Mar 12 00:45:24.280865 kubelet[3156]: I0312 00:45:24.280235 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b918b6d-d1dc-41b7-9960-5c26c50358cf-tigera-ca-bundle\") pod \"calico-kube-controllers-7f4957d78b-vxbpg\" (UID: \"2b918b6d-d1dc-41b7-9960-5c26c50358cf\") " pod="calico-system/calico-kube-controllers-7f4957d78b-vxbpg" Mar 12 00:45:24.280865 kubelet[3156]: I0312 00:45:24.280252 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg89s\" (UniqueName: 
\"kubernetes.io/projected/2b918b6d-d1dc-41b7-9960-5c26c50358cf-kube-api-access-zg89s\") pod \"calico-kube-controllers-7f4957d78b-vxbpg\" (UID: \"2b918b6d-d1dc-41b7-9960-5c26c50358cf\") " pod="calico-system/calico-kube-controllers-7f4957d78b-vxbpg" Mar 12 00:45:24.280977 kubelet[3156]: I0312 00:45:24.280267 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0562416e-6f3d-4639-bbce-d4ae1ac939e1-calico-apiserver-certs\") pod \"calico-apiserver-784f7866bd-ccwm9\" (UID: \"0562416e-6f3d-4639-bbce-d4ae1ac939e1\") " pod="calico-system/calico-apiserver-784f7866bd-ccwm9" Mar 12 00:45:24.280977 kubelet[3156]: I0312 00:45:24.280282 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-backend-key-pair\") pod \"whisker-f56947b46-d2zh8\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " pod="calico-system/whisker-f56947b46-d2zh8" Mar 12 00:45:24.280977 kubelet[3156]: I0312 00:45:24.280299 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkz2\" (UniqueName: \"kubernetes.io/projected/4baa34da-a95a-4849-9b92-ba607bae2503-kube-api-access-5tkz2\") pod \"whisker-f56947b46-d2zh8\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " pod="calico-system/whisker-f56947b46-d2zh8" Mar 12 00:45:24.280977 kubelet[3156]: I0312 00:45:24.280317 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-ca-bundle\") pod \"whisker-f56947b46-d2zh8\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " pod="calico-system/whisker-f56947b46-d2zh8" Mar 12 00:45:24.355288 containerd[1719]: time="2026-03-12T00:45:24.354855980Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6th7z,Uid:41eb878a-d7e0-41f9-bfd8-96489d627e74,Namespace:kube-system,Attempt:0,}" Mar 12 00:45:24.357192 containerd[1719]: time="2026-03-12T00:45:24.357075582Z" level=error msg="Failed to destroy network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.357646 containerd[1719]: time="2026-03-12T00:45:24.357484342Z" level=error msg="encountered an error cleaning up failed sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.357646 containerd[1719]: time="2026-03-12T00:45:24.357533222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b7676,Uid:fe7afae4-b713-43a4-990f-e7b7f89a4386,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.359051 kubelet[3156]: E0312 00:45:24.357838 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 
00:45:24.359335 kubelet[3156]: E0312 00:45:24.359117 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b7676" Mar 12 00:45:24.359335 kubelet[3156]: E0312 00:45:24.359143 3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b7676" Mar 12 00:45:24.359403 kubelet[3156]: E0312 00:45:24.359211 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b7676_calico-system(fe7afae4-b713-43a4-990f-e7b7f89a4386)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b7676_calico-system(fe7afae4-b713-43a4-990f-e7b7f89a4386)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:24.361184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018-shm.mount: Deactivated successfully. 
Mar 12 00:45:24.459214 containerd[1719]: time="2026-03-12T00:45:24.459155459Z" level=error msg="Failed to destroy network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.459499 containerd[1719]: time="2026-03-12T00:45:24.459472299Z" level=error msg="encountered an error cleaning up failed sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.459550 containerd[1719]: time="2026-03-12T00:45:24.459530459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6th7z,Uid:41eb878a-d7e0-41f9-bfd8-96489d627e74,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.459771 kubelet[3156]: E0312 00:45:24.459735 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.459842 kubelet[3156]: E0312 00:45:24.459792 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6th7z" Mar 12 00:45:24.459842 kubelet[3156]: E0312 00:45:24.459812 3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6th7z" Mar 12 00:45:24.459930 kubelet[3156]: E0312 00:45:24.459863 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6th7z_kube-system(41eb878a-d7e0-41f9-bfd8-96489d627e74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6th7z_kube-system(41eb878a-d7e0-41f9-bfd8-96489d627e74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6th7z" podUID="41eb878a-d7e0-41f9-bfd8-96489d627e74" Mar 12 00:45:24.506578 containerd[1719]: time="2026-03-12T00:45:24.506539894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m9kwb,Uid:bb4b0ced-686e-4023-85d0-51713ff7caee,Namespace:kube-system,Attempt:0,}" Mar 12 00:45:24.519625 containerd[1719]: time="2026-03-12T00:45:24.519591624Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7f4957d78b-vxbpg,Uid:2b918b6d-d1dc-41b7-9960-5c26c50358cf,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:24.529243 kubelet[3156]: I0312 00:45:24.529213 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:24.529892 containerd[1719]: time="2026-03-12T00:45:24.529859072Z" level=info msg="StopPodSandbox for \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\"" Mar 12 00:45:24.530649 containerd[1719]: time="2026-03-12T00:45:24.530618512Z" level=info msg="Ensure that sandbox d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5 in task-service has been cleanup successfully" Mar 12 00:45:24.534127 containerd[1719]: time="2026-03-12T00:45:24.534097515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f56947b46-d2zh8,Uid:4baa34da-a95a-4849-9b92-ba607bae2503,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:24.542406 kubelet[3156]: I0312 00:45:24.540941 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:45:24.542682 containerd[1719]: time="2026-03-12T00:45:24.542651481Z" level=info msg="StopPodSandbox for \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\"" Mar 12 00:45:24.560423 containerd[1719]: time="2026-03-12T00:45:24.559748854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-dcrfg,Uid:f4641c63-b65a-46ce-b0bc-583db4549b9d,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:24.567614 containerd[1719]: time="2026-03-12T00:45:24.567042740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-ccwm9,Uid:0562416e-6f3d-4639-bbce-d4ae1ac939e1,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:24.586705 containerd[1719]: time="2026-03-12T00:45:24.586672794Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rpsht,Uid:da3b3329-2fdd-41d7-bb40-059c94905ea3,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:24.587111 containerd[1719]: time="2026-03-12T00:45:24.587087794Z" level=info msg="Ensure that sandbox b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018 in task-service has been cleanup successfully" Mar 12 00:45:24.602593 containerd[1719]: time="2026-03-12T00:45:24.602553886Z" level=info msg="CreateContainer within sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 12 00:45:24.604587 containerd[1719]: time="2026-03-12T00:45:24.603362207Z" level=error msg="StopPodSandbox for \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\" failed" error="failed to destroy network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.604646 kubelet[3156]: E0312 00:45:24.603576 3156 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:24.604646 kubelet[3156]: E0312 00:45:24.603774 3156 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5"} Mar 12 00:45:24.604646 kubelet[3156]: E0312 00:45:24.603921 3156 kuberuntime_manager.go:1161] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"41eb878a-d7e0-41f9-bfd8-96489d627e74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 12 00:45:24.604646 kubelet[3156]: E0312 00:45:24.603952 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41eb878a-d7e0-41f9-bfd8-96489d627e74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6th7z" podUID="41eb878a-d7e0-41f9-bfd8-96489d627e74" Mar 12 00:45:24.636406 containerd[1719]: time="2026-03-12T00:45:24.636084631Z" level=error msg="StopPodSandbox for \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\" failed" error="failed to destroy network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.636866 kubelet[3156]: E0312 00:45:24.636328 3156 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:45:24.636866 kubelet[3156]: E0312 00:45:24.636786 3156 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018"} Mar 12 00:45:24.637016 kubelet[3156]: E0312 00:45:24.636835 3156 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe7afae4-b713-43a4-990f-e7b7f89a4386\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 12 00:45:24.637016 kubelet[3156]: E0312 00:45:24.636987 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe7afae4-b713-43a4-990f-e7b7f89a4386\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b7676" podUID="fe7afae4-b713-43a4-990f-e7b7f89a4386" Mar 12 00:45:24.662128 containerd[1719]: time="2026-03-12T00:45:24.662015450Z" level=error msg="Failed to destroy network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.662335 containerd[1719]: 
time="2026-03-12T00:45:24.662307610Z" level=error msg="encountered an error cleaning up failed sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.662397 containerd[1719]: time="2026-03-12T00:45:24.662355570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m9kwb,Uid:bb4b0ced-686e-4023-85d0-51713ff7caee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.662601 kubelet[3156]: E0312 00:45:24.662559 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.662657 kubelet[3156]: E0312 00:45:24.662614 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m9kwb" Mar 12 00:45:24.662657 kubelet[3156]: E0312 00:45:24.662636 3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m9kwb" Mar 12 00:45:24.662727 kubelet[3156]: E0312 00:45:24.662681 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m9kwb_kube-system(bb4b0ced-686e-4023-85d0-51713ff7caee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m9kwb_kube-system(bb4b0ced-686e-4023-85d0-51713ff7caee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m9kwb" podUID="bb4b0ced-686e-4023-85d0-51713ff7caee" Mar 12 00:45:24.690134 containerd[1719]: time="2026-03-12T00:45:24.690092591Z" level=error msg="Failed to destroy network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.691093 containerd[1719]: time="2026-03-12T00:45:24.690938832Z" level=error msg="encountered an error cleaning up failed sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 12 00:45:24.691093 containerd[1719]: time="2026-03-12T00:45:24.690988032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4957d78b-vxbpg,Uid:2b918b6d-d1dc-41b7-9960-5c26c50358cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.691221 kubelet[3156]: E0312 00:45:24.691186 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.691261 kubelet[3156]: E0312 00:45:24.691240 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f4957d78b-vxbpg" Mar 12 00:45:24.691287 kubelet[3156]: E0312 00:45:24.691261 3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7f4957d78b-vxbpg" Mar 12 00:45:24.691343 kubelet[3156]: E0312 00:45:24.691311 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f4957d78b-vxbpg_calico-system(2b918b6d-d1dc-41b7-9960-5c26c50358cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f4957d78b-vxbpg_calico-system(2b918b6d-d1dc-41b7-9960-5c26c50358cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f4957d78b-vxbpg" podUID="2b918b6d-d1dc-41b7-9960-5c26c50358cf" Mar 12 00:45:24.750494 containerd[1719]: time="2026-03-12T00:45:24.750272316Z" level=info msg="CreateContainer within sandbox \"9f648e1971929584c4063afcf8af0f7a1f240aff2e8fdf2d6310d600423c43a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"99d49febdaab663dd356b671a6315d7c5f066e052dbeb8df852c8a714c2410c8\"" Mar 12 00:45:24.753404 containerd[1719]: time="2026-03-12T00:45:24.753351238Z" level=info msg="StartContainer for \"99d49febdaab663dd356b671a6315d7c5f066e052dbeb8df852c8a714c2410c8\"" Mar 12 00:45:24.821537 systemd[1]: Started cri-containerd-99d49febdaab663dd356b671a6315d7c5f066e052dbeb8df852c8a714c2410c8.scope - libcontainer container 99d49febdaab663dd356b671a6315d7c5f066e052dbeb8df852c8a714c2410c8. 
Mar 12 00:45:24.859814 containerd[1719]: time="2026-03-12T00:45:24.859543637Z" level=error msg="Failed to destroy network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.860687 containerd[1719]: time="2026-03-12T00:45:24.860659198Z" level=error msg="encountered an error cleaning up failed sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.861001 containerd[1719]: time="2026-03-12T00:45:24.860817638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rpsht,Uid:da3b3329-2fdd-41d7-bb40-059c94905ea3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.861788 kubelet[3156]: E0312 00:45:24.861746 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.861884 kubelet[3156]: E0312 00:45:24.861799 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rpsht" Mar 12 00:45:24.861884 kubelet[3156]: E0312 00:45:24.861819 3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rpsht" Mar 12 00:45:24.861884 kubelet[3156]: E0312 00:45:24.861861 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-rpsht_calico-system(da3b3329-2fdd-41d7-bb40-059c94905ea3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-rpsht_calico-system(da3b3329-2fdd-41d7-bb40-059c94905ea3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-rpsht" podUID="da3b3329-2fdd-41d7-bb40-059c94905ea3" Mar 12 00:45:24.868042 containerd[1719]: time="2026-03-12T00:45:24.867991123Z" level=error msg="Failed to destroy network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 12 00:45:24.868606 containerd[1719]: time="2026-03-12T00:45:24.868508603Z" level=error msg="encountered an error cleaning up failed sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.869188 containerd[1719]: time="2026-03-12T00:45:24.869145244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f56947b46-d2zh8,Uid:4baa34da-a95a-4849-9b92-ba607bae2503,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.869612 kubelet[3156]: E0312 00:45:24.869472 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.869612 kubelet[3156]: E0312 00:45:24.869527 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f56947b46-d2zh8" Mar 12 00:45:24.869612 kubelet[3156]: E0312 00:45:24.869548 
3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f56947b46-d2zh8" Mar 12 00:45:24.869752 kubelet[3156]: E0312 00:45:24.869592 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f56947b46-d2zh8_calico-system(4baa34da-a95a-4849-9b92-ba607bae2503)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f56947b46-d2zh8_calico-system(4baa34da-a95a-4849-9b92-ba607bae2503)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f56947b46-d2zh8" podUID="4baa34da-a95a-4849-9b92-ba607bae2503" Mar 12 00:45:24.877186 containerd[1719]: time="2026-03-12T00:45:24.875499889Z" level=error msg="Failed to destroy network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.877186 containerd[1719]: time="2026-03-12T00:45:24.875843969Z" level=error msg="encountered an error cleaning up failed sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.877186 containerd[1719]: time="2026-03-12T00:45:24.875889529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-ccwm9,Uid:0562416e-6f3d-4639-bbce-d4ae1ac939e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.877186 containerd[1719]: time="2026-03-12T00:45:24.876093849Z" level=error msg="Failed to destroy network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.877344 kubelet[3156]: E0312 00:45:24.876163 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.877344 kubelet[3156]: E0312 00:45:24.876227 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-784f7866bd-ccwm9" Mar 12 00:45:24.877344 kubelet[3156]: E0312 
00:45:24.876248 3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-784f7866bd-ccwm9" Mar 12 00:45:24.877459 kubelet[3156]: E0312 00:45:24.876370 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784f7866bd-ccwm9_calico-system(0562416e-6f3d-4639-bbce-d4ae1ac939e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784f7866bd-ccwm9_calico-system(0562416e-6f3d-4639-bbce-d4ae1ac939e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-784f7866bd-ccwm9" podUID="0562416e-6f3d-4639-bbce-d4ae1ac939e1" Mar 12 00:45:24.878191 containerd[1719]: time="2026-03-12T00:45:24.878154411Z" level=error msg="encountered an error cleaning up failed sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.878255 containerd[1719]: time="2026-03-12T00:45:24.878208531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-dcrfg,Uid:f4641c63-b65a-46ce-b0bc-583db4549b9d,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.878355 kubelet[3156]: E0312 00:45:24.878327 3156 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 00:45:24.878635 kubelet[3156]: E0312 00:45:24.878372 3156 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-784f7866bd-dcrfg" Mar 12 00:45:24.878635 kubelet[3156]: E0312 00:45:24.878543 3156 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-784f7866bd-dcrfg" Mar 12 00:45:24.878728 kubelet[3156]: E0312 00:45:24.878642 3156 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784f7866bd-dcrfg_calico-system(f4641c63-b65a-46ce-b0bc-583db4549b9d)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784f7866bd-dcrfg_calico-system(f4641c63-b65a-46ce-b0bc-583db4549b9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-784f7866bd-dcrfg" podUID="f4641c63-b65a-46ce-b0bc-583db4549b9d" Mar 12 00:45:24.885763 containerd[1719]: time="2026-03-12T00:45:24.885723736Z" level=info msg="StartContainer for \"99d49febdaab663dd356b671a6315d7c5f066e052dbeb8df852c8a714c2410c8\" returns successfully" Mar 12 00:45:25.328141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5-shm.mount: Deactivated successfully. Mar 12 00:45:25.543524 kubelet[3156]: I0312 00:45:25.543492 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:25.545609 containerd[1719]: time="2026-03-12T00:45:25.545146386Z" level=info msg="StopPodSandbox for \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\"" Mar 12 00:45:25.545609 containerd[1719]: time="2026-03-12T00:45:25.545310826Z" level=info msg="Ensure that sandbox fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4 in task-service has been cleanup successfully" Mar 12 00:45:25.546595 kubelet[3156]: I0312 00:45:25.546397 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:25.547197 containerd[1719]: time="2026-03-12T00:45:25.546860827Z" level=info msg="StopPodSandbox for \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\"" 
Mar 12 00:45:25.547197 containerd[1719]: time="2026-03-12T00:45:25.547007027Z" level=info msg="Ensure that sandbox 3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9 in task-service has been cleanup successfully" Mar 12 00:45:25.551977 kubelet[3156]: I0312 00:45:25.551951 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:25.552638 containerd[1719]: time="2026-03-12T00:45:25.552577991Z" level=info msg="StopPodSandbox for \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\"" Mar 12 00:45:25.552782 containerd[1719]: time="2026-03-12T00:45:25.552717952Z" level=info msg="Ensure that sandbox 511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb in task-service has been cleanup successfully" Mar 12 00:45:25.557678 kubelet[3156]: I0312 00:45:25.557650 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:25.559864 containerd[1719]: time="2026-03-12T00:45:25.559443597Z" level=info msg="StopPodSandbox for \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\"" Mar 12 00:45:25.564262 containerd[1719]: time="2026-03-12T00:45:25.561480878Z" level=info msg="Ensure that sandbox e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088 in task-service has been cleanup successfully" Mar 12 00:45:25.580474 kubelet[3156]: I0312 00:45:25.580358 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:25.582605 containerd[1719]: time="2026-03-12T00:45:25.582568014Z" level=info msg="StopPodSandbox for \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\"" Mar 12 00:45:25.583017 containerd[1719]: time="2026-03-12T00:45:25.582982614Z" level=info msg="Ensure that sandbox 
1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f in task-service has been cleanup successfully" Mar 12 00:45:25.588831 kubelet[3156]: I0312 00:45:25.588801 3156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:25.590755 containerd[1719]: time="2026-03-12T00:45:25.590729380Z" level=info msg="StopPodSandbox for \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\"" Mar 12 00:45:25.591036 containerd[1719]: time="2026-03-12T00:45:25.591014820Z" level=info msg="Ensure that sandbox ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb in task-service has been cleanup successfully" Mar 12 00:45:25.612152 kubelet[3156]: I0312 00:45:25.612089 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tpfg4" podStartSLOduration=5.223599505 podStartE2EDuration="24.611687995s" podCreationTimestamp="2026-03-12 00:45:01 +0000 UTC" firstStartedPulling="2026-03-12 00:45:02.521504294 +0000 UTC m=+23.210592443" lastFinishedPulling="2026-03-12 00:45:21.909592744 +0000 UTC m=+42.598680933" observedRunningTime="2026-03-12 00:45:25.610278794 +0000 UTC m=+46.299366983" watchObservedRunningTime="2026-03-12 00:45:25.611687995 +0000 UTC m=+46.300776144" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.702 [INFO][4321] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.703 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" iface="eth0" netns="/var/run/netns/cni-7cef191a-3952-1ce1-077b-12dd433b5da2" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.704 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" iface="eth0" netns="/var/run/netns/cni-7cef191a-3952-1ce1-077b-12dd433b5da2" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.704 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" iface="eth0" netns="/var/run/netns/cni-7cef191a-3952-1ce1-077b-12dd433b5da2" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.704 [INFO][4321] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.704 [INFO][4321] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.788 [INFO][4404] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.790 [INFO][4404] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.790 [INFO][4404] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.831 [WARNING][4404] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.831 [INFO][4404] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.834 [INFO][4404] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:25.853538 containerd[1719]: 2026-03-12 00:45:25.841 [INFO][4321] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:25.856067 systemd[1]: run-netns-cni\x2d7cef191a\x2d3952\x2d1ce1\x2d077b\x2d12dd433b5da2.mount: Deactivated successfully. 
Mar 12 00:45:25.860505 containerd[1719]: time="2026-03-12T00:45:25.859742460Z" level=info msg="TearDown network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\" successfully" Mar 12 00:45:25.860505 containerd[1719]: time="2026-03-12T00:45:25.859781260Z" level=info msg="StopPodSandbox for \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\" returns successfully" Mar 12 00:45:25.868395 containerd[1719]: time="2026-03-12T00:45:25.868253226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4957d78b-vxbpg,Uid:2b918b6d-d1dc-41b7-9960-5c26c50358cf,Namespace:calico-system,Attempt:1,}" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.761 [INFO][4322] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.761 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" iface="eth0" netns="/var/run/netns/cni-6daab8fc-ac00-202b-ebf5-4f1a1490b90e" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.761 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" iface="eth0" netns="/var/run/netns/cni-6daab8fc-ac00-202b-ebf5-4f1a1490b90e" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.761 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" iface="eth0" netns="/var/run/netns/cni-6daab8fc-ac00-202b-ebf5-4f1a1490b90e" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.761 [INFO][4322] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.761 [INFO][4322] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.793 [INFO][4420] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.793 [INFO][4420] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.834 [INFO][4420] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.859 [WARNING][4420] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.859 [INFO][4420] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.868 [INFO][4420] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:25.892394 containerd[1719]: 2026-03-12 00:45:25.884 [INFO][4322] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:25.893400 containerd[1719]: time="2026-03-12T00:45:25.893052324Z" level=info msg="TearDown network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\" successfully" Mar 12 00:45:25.893400 containerd[1719]: time="2026-03-12T00:45:25.893082524Z" level=info msg="StopPodSandbox for \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\" returns successfully" Mar 12 00:45:25.895587 systemd[1]: run-netns-cni\x2d6daab8fc\x2dac00\x2d202b\x2debf5\x2d4f1a1490b90e.mount: Deactivated successfully. 
Mar 12 00:45:25.899654 containerd[1719]: time="2026-03-12T00:45:25.899178169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-dcrfg,Uid:f4641c63-b65a-46ce-b0bc-583db4549b9d,Namespace:calico-system,Attempt:1,}" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.817 [INFO][4350] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.817 [INFO][4350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" iface="eth0" netns="/var/run/netns/cni-32c0fa4d-f3a0-b6af-ce06-3c616ca68b92" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.818 [INFO][4350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" iface="eth0" netns="/var/run/netns/cni-32c0fa4d-f3a0-b6af-ce06-3c616ca68b92" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.819 [INFO][4350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" iface="eth0" netns="/var/run/netns/cni-32c0fa4d-f3a0-b6af-ce06-3c616ca68b92" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.819 [INFO][4350] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.819 [INFO][4350] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.881 [INFO][4442] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.881 [INFO][4442] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.881 [INFO][4442] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.899 [WARNING][4442] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.899 [INFO][4442] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.901 [INFO][4442] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:25.916503 containerd[1719]: 2026-03-12 00:45:25.906 [INFO][4350] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:25.918168 containerd[1719]: time="2026-03-12T00:45:25.918138423Z" level=info msg="TearDown network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\" successfully" Mar 12 00:45:25.918324 containerd[1719]: time="2026-03-12T00:45:25.918308103Z" level=info msg="StopPodSandbox for \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\" returns successfully" Mar 12 00:45:25.919517 containerd[1719]: time="2026-03-12T00:45:25.919490704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-ccwm9,Uid:0562416e-6f3d-4639-bbce-d4ae1ac939e1,Namespace:calico-system,Attempt:1,}" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.794 [INFO][4374] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.794 [INFO][4374] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" iface="eth0" netns="/var/run/netns/cni-7d1eb597-f33b-0be8-0052-20722cf959ac" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.795 [INFO][4374] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" iface="eth0" netns="/var/run/netns/cni-7d1eb597-f33b-0be8-0052-20722cf959ac" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.798 [INFO][4374] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" iface="eth0" netns="/var/run/netns/cni-7d1eb597-f33b-0be8-0052-20722cf959ac" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.798 [INFO][4374] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.798 [INFO][4374] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.886 [INFO][4434] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.887 [INFO][4434] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.902 [INFO][4434] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.921 [WARNING][4434] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.922 [INFO][4434] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.925 [INFO][4434] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:25.933133 containerd[1719]: 2026-03-12 00:45:25.929 [INFO][4374] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:25.933781 containerd[1719]: time="2026-03-12T00:45:25.933195874Z" level=info msg="TearDown network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\" successfully" Mar 12 00:45:25.933781 containerd[1719]: time="2026-03-12T00:45:25.933215554Z" level=info msg="StopPodSandbox for \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\" returns successfully" Mar 12 00:45:25.934402 containerd[1719]: time="2026-03-12T00:45:25.934002315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rpsht,Uid:da3b3329-2fdd-41d7-bb40-059c94905ea3,Namespace:calico-system,Attempt:1,}" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.796 [INFO][4348] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.797 [INFO][4348] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" iface="eth0" netns="/var/run/netns/cni-b2ed7ba3-04e8-5b64-3d12-6db2066f53f2" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.797 [INFO][4348] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" iface="eth0" netns="/var/run/netns/cni-b2ed7ba3-04e8-5b64-3d12-6db2066f53f2" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.797 [INFO][4348] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" iface="eth0" netns="/var/run/netns/cni-b2ed7ba3-04e8-5b64-3d12-6db2066f53f2" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.797 [INFO][4348] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.797 [INFO][4348] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.911 [INFO][4433] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.911 [INFO][4433] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.925 [INFO][4433] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.937 [WARNING][4433] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.941 [INFO][4433] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.942 [INFO][4433] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:25.947640 containerd[1719]: 2026-03-12 00:45:25.945 [INFO][4348] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:25.948269 containerd[1719]: time="2026-03-12T00:45:25.947749725Z" level=info msg="TearDown network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\" successfully" Mar 12 00:45:25.948269 containerd[1719]: time="2026-03-12T00:45:25.947771245Z" level=info msg="StopPodSandbox for \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\" returns successfully" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.839 [INFO][4375] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.842 [INFO][4375] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" iface="eth0" netns="/var/run/netns/cni-402973ca-003e-18b7-dc64-291c4922b8be" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.844 [INFO][4375] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" iface="eth0" netns="/var/run/netns/cni-402973ca-003e-18b7-dc64-291c4922b8be" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.845 [INFO][4375] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" iface="eth0" netns="/var/run/netns/cni-402973ca-003e-18b7-dc64-291c4922b8be" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.845 [INFO][4375] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.846 [INFO][4375] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.937 [INFO][4447] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.941 [INFO][4447] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.942 [INFO][4447] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.955 [WARNING][4447] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.955 [INFO][4447] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.957 [INFO][4447] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:25.961437 containerd[1719]: 2026-03-12 00:45:25.959 [INFO][4375] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:25.961437 containerd[1719]: time="2026-03-12T00:45:25.961338855Z" level=info msg="TearDown network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\" successfully" Mar 12 00:45:25.961437 containerd[1719]: time="2026-03-12T00:45:25.961359415Z" level=info msg="StopPodSandbox for \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\" returns successfully" Mar 12 00:45:25.961927 containerd[1719]: time="2026-03-12T00:45:25.961869655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m9kwb,Uid:bb4b0ced-686e-4023-85d0-51713ff7caee,Namespace:kube-system,Attempt:1,}" Mar 12 00:45:25.995552 kubelet[3156]: I0312 00:45:25.995455 3156 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-backend-key-pair\") pod \"4baa34da-a95a-4849-9b92-ba607bae2503\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " Mar 12 00:45:25.995552 kubelet[3156]: I0312 00:45:25.995517 3156 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-ca-bundle\") pod \"4baa34da-a95a-4849-9b92-ba607bae2503\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " Mar 12 00:45:25.995552 kubelet[3156]: I0312 00:45:25.995537 3156 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tkz2\" (UniqueName: \"kubernetes.io/projected/4baa34da-a95a-4849-9b92-ba607bae2503-kube-api-access-5tkz2\") pod \"4baa34da-a95a-4849-9b92-ba607bae2503\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " Mar 12 00:45:25.995552 kubelet[3156]: I0312 00:45:25.995562 3156 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-nginx-config\") pod \"4baa34da-a95a-4849-9b92-ba607bae2503\" (UID: \"4baa34da-a95a-4849-9b92-ba607bae2503\") " Mar 12 00:45:25.996007 kubelet[3156]: I0312 00:45:25.995883 3156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "4baa34da-a95a-4849-9b92-ba607bae2503" (UID: "4baa34da-a95a-4849-9b92-ba607bae2503"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 00:45:25.999698 kubelet[3156]: I0312 00:45:25.999505 3156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4baa34da-a95a-4849-9b92-ba607bae2503" (UID: "4baa34da-a95a-4849-9b92-ba607bae2503"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 00:45:26.001228 kubelet[3156]: I0312 00:45:26.001205 3156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4baa34da-a95a-4849-9b92-ba607bae2503-kube-api-access-5tkz2" (OuterVolumeSpecName: "kube-api-access-5tkz2") pod "4baa34da-a95a-4849-9b92-ba607bae2503" (UID: "4baa34da-a95a-4849-9b92-ba607bae2503"). InnerVolumeSpecName "kube-api-access-5tkz2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 00:45:26.003547 kubelet[3156]: I0312 00:45:26.003507 3156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4baa34da-a95a-4849-9b92-ba607bae2503" (UID: "4baa34da-a95a-4849-9b92-ba607bae2503"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 00:45:26.097221 kubelet[3156]: I0312 00:45:26.096391 3156 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-ca-bundle\") on node \"ci-4081.3.6-n-d10d02cd33\" DevicePath \"\"" Mar 12 00:45:26.097221 kubelet[3156]: I0312 00:45:26.097164 3156 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5tkz2\" (UniqueName: \"kubernetes.io/projected/4baa34da-a95a-4849-9b92-ba607bae2503-kube-api-access-5tkz2\") on node \"ci-4081.3.6-n-d10d02cd33\" DevicePath \"\"" Mar 12 00:45:26.097221 kubelet[3156]: I0312 00:45:26.097183 3156 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4baa34da-a95a-4849-9b92-ba607bae2503-nginx-config\") on node \"ci-4081.3.6-n-d10d02cd33\" DevicePath \"\"" Mar 12 00:45:26.097221 kubelet[3156]: I0312 00:45:26.097193 3156 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4baa34da-a95a-4849-9b92-ba607bae2503-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-d10d02cd33\" DevicePath \"\"" Mar 12 00:45:26.332321 systemd[1]: run-netns-cni\x2d7d1eb597\x2df33b\x2d0be8\x2d0052\x2d20722cf959ac.mount: Deactivated successfully. Mar 12 00:45:26.332424 systemd[1]: run-netns-cni\x2d32c0fa4d\x2df3a0\x2db6af\x2dce06\x2d3c616ca68b92.mount: Deactivated successfully. Mar 12 00:45:26.332470 systemd[1]: run-netns-cni\x2db2ed7ba3\x2d04e8\x2d5b64\x2d3d12\x2d6db2066f53f2.mount: Deactivated successfully. Mar 12 00:45:26.332512 systemd[1]: run-netns-cni\x2d402973ca\x2d003e\x2d18b7\x2ddc64\x2d291c4922b8be.mount: Deactivated successfully. Mar 12 00:45:26.332556 systemd[1]: var-lib-kubelet-pods-4baa34da\x2da95a\x2d4849\x2d9b92\x2dba607bae2503-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5tkz2.mount: Deactivated successfully. 
Mar 12 00:45:26.332605 systemd[1]: var-lib-kubelet-pods-4baa34da\x2da95a\x2d4849\x2d9b92\x2dba607bae2503-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 00:45:26.341738 systemd-networkd[1360]: calid93add5251f: Link UP Mar 12 00:45:26.342491 systemd-networkd[1360]: calid93add5251f: Gained carrier Mar 12 00:45:26.343598 systemd-networkd[1360]: cali43f75afcfeb: Link UP Mar 12 00:45:26.346840 systemd-networkd[1360]: cali43f75afcfeb: Gained carrier Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.053 [ERROR][4471] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.075 [INFO][4471] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0 calico-apiserver-784f7866bd- calico-system f4641c63-b65a-46ce-b0bc-583db4549b9d 935 0 2026-03-12 00:45:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784f7866bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 calico-apiserver-784f7866bd-dcrfg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali43f75afcfeb [] [] }} ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.075 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.165 [INFO][4518] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" HandleID="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.192 [INFO][4518] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" HandleID="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"calico-apiserver-784f7866bd-dcrfg", "timestamp":"2026-03-12 00:45:26.165240966 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.192 [INFO][4518] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.247 [INFO][4518] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.247 [INFO][4518] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.290 [INFO][4518] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.295 [INFO][4518] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.299 [INFO][4518] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.301 [INFO][4518] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.303 [INFO][4518] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.303 [INFO][4518] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.304 [INFO][4518] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.310 [INFO][4518] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.328 [INFO][4518] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.194/26] block=192.168.20.192/26 handle="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.328 [INFO][4518] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.194/26] handle="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.328 [INFO][4518] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:26.365532 containerd[1719]: 2026-03-12 00:45:26.328 [INFO][4518] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.194/26] IPv6=[] ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" HandleID="k8s-pod-network.ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:26.366129 containerd[1719]: 2026-03-12 00:45:26.336 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"f4641c63-b65a-46ce-b0bc-583db4549b9d", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"calico-apiserver-784f7866bd-dcrfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f75afcfeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.366129 containerd[1719]: 2026-03-12 00:45:26.336 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.194/32] ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:26.366129 containerd[1719]: 2026-03-12 00:45:26.336 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43f75afcfeb ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:26.366129 containerd[1719]: 2026-03-12 00:45:26.347 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" 
WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:26.366129 containerd[1719]: 2026-03-12 00:45:26.348 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"f4641c63-b65a-46ce-b0bc-583db4549b9d", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c", Pod:"calico-apiserver-784f7866bd-dcrfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f75afcfeb", MAC:"d6:56:35:a4:a3:db", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.366129 containerd[1719]: 2026-03-12 00:45:26.361 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-dcrfg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.022 [ERROR][4460] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.041 [INFO][4460] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0 calico-kube-controllers-7f4957d78b- calico-system 2b918b6d-d1dc-41b7-9960-5c26c50358cf 934 0 2026-03-12 00:45:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f4957d78b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 calico-kube-controllers-7f4957d78b-vxbpg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid93add5251f [] [] }} ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.041 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.127 [INFO][4491] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" HandleID="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.158 [INFO][4491] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" HandleID="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a9ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"calico-kube-controllers-7f4957d78b-vxbpg", "timestamp":"2026-03-12 00:45:26.127896099 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186c60)} Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.158 [INFO][4491] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.158 [INFO][4491] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.158 [INFO][4491] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.161 [INFO][4491] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.187 [INFO][4491] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.197 [INFO][4491] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.203 [INFO][4491] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.212 [INFO][4491] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.212 [INFO][4491] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.221 [INFO][4491] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62 Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.232 [INFO][4491] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.246 [INFO][4491] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.193/26] block=192.168.20.192/26 handle="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.246 [INFO][4491] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.193/26] handle="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.246 [INFO][4491] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:26.374132 containerd[1719]: 2026-03-12 00:45:26.247 [INFO][4491] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.193/26] IPv6=[] ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" HandleID="k8s-pod-network.8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:26.375050 containerd[1719]: 2026-03-12 00:45:26.251 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0", GenerateName:"calico-kube-controllers-7f4957d78b-", Namespace:"calico-system", SelfLink:"", UID:"2b918b6d-d1dc-41b7-9960-5c26c50358cf", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4957d78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"calico-kube-controllers-7f4957d78b-vxbpg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid93add5251f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.375050 containerd[1719]: 2026-03-12 00:45:26.251 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.193/32] ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:26.375050 containerd[1719]: 2026-03-12 00:45:26.251 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid93add5251f ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:26.375050 containerd[1719]: 2026-03-12 00:45:26.345 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:26.375050 containerd[1719]: 2026-03-12 00:45:26.349 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0", GenerateName:"calico-kube-controllers-7f4957d78b-", Namespace:"calico-system", SelfLink:"", UID:"2b918b6d-d1dc-41b7-9960-5c26c50358cf", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4957d78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62", Pod:"calico-kube-controllers-7f4957d78b-vxbpg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.193/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid93add5251f", MAC:"fa:f6:07:22:b7:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.375050 containerd[1719]: 2026-03-12 00:45:26.368 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62" Namespace="calico-system" Pod="calico-kube-controllers-7f4957d78b-vxbpg" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:26.447632 containerd[1719]: time="2026-03-12T00:45:26.447509616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:26.447632 containerd[1719]: time="2026-03-12T00:45:26.447572256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:26.447632 containerd[1719]: time="2026-03-12T00:45:26.447594136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.447821 containerd[1719]: time="2026-03-12T00:45:26.447673656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.453493 containerd[1719]: time="2026-03-12T00:45:26.453268620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:26.453493 containerd[1719]: time="2026-03-12T00:45:26.453321980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:26.453493 containerd[1719]: time="2026-03-12T00:45:26.453333260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.453760 containerd[1719]: time="2026-03-12T00:45:26.453478301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.495610 systemd[1]: Started cri-containerd-8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62.scope - libcontainer container 8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62. Mar 12 00:45:26.502537 systemd-networkd[1360]: cali256c3133b67: Link UP Mar 12 00:45:26.502736 systemd-networkd[1360]: cali256c3133b67: Gained carrier Mar 12 00:45:26.517476 systemd[1]: Started cri-containerd-ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c.scope - libcontainer container ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c. 
Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.127 [ERROR][4507] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.153 [INFO][4507] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0 coredns-674b8bbfcf- kube-system bb4b0ced-686e-4023-85d0-51713ff7caee 939 0 2026-03-12 00:44:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 coredns-674b8bbfcf-m9kwb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali256c3133b67 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.154 [INFO][4507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.194 [INFO][4545] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" HandleID="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:26.537552 containerd[1719]: 
2026-03-12 00:45:26.210 [INFO][4545] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" HandleID="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"coredns-674b8bbfcf-m9kwb", "timestamp":"2026-03-12 00:45:26.194227068 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400024ef20)} Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.210 [INFO][4545] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.328 [INFO][4545] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.328 [INFO][4545] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.395 [INFO][4545] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.405 [INFO][4545] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.438 [INFO][4545] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.443 [INFO][4545] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.448 [INFO][4545] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.448 [INFO][4545] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.454 [INFO][4545] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.464 [INFO][4545] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.488 [INFO][4545] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.195/26] block=192.168.20.192/26 handle="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.489 [INFO][4545] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.195/26] handle="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.489 [INFO][4545] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:26.537552 containerd[1719]: 2026-03-12 00:45:26.489 [INFO][4545] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.195/26] IPv6=[] ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" HandleID="k8s-pod-network.46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:26.538604 containerd[1719]: 2026-03-12 00:45:26.496 [INFO][4507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bb4b0ced-686e-4023-85d0-51713ff7caee", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"coredns-674b8bbfcf-m9kwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256c3133b67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.538604 containerd[1719]: 2026-03-12 00:45:26.497 [INFO][4507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.195/32] ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:26.538604 containerd[1719]: 2026-03-12 00:45:26.497 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali256c3133b67 ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:26.538604 containerd[1719]: 2026-03-12 00:45:26.506 [INFO][4507] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:26.538604 containerd[1719]: 2026-03-12 00:45:26.507 [INFO][4507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bb4b0ced-686e-4023-85d0-51713ff7caee", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d", Pod:"coredns-674b8bbfcf-m9kwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256c3133b67", MAC:"56:f4:8d:ef:c3:9a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.538604 containerd[1719]: 2026-03-12 00:45:26.534 [INFO][4507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d" Namespace="kube-system" Pod="coredns-674b8bbfcf-m9kwb" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:26.584479 systemd-networkd[1360]: calibdc0c100907: Link UP Mar 12 00:45:26.584649 systemd-networkd[1360]: calibdc0c100907: Gained carrier Mar 12 00:45:26.613236 containerd[1719]: time="2026-03-12T00:45:26.612869059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:26.613236 containerd[1719]: time="2026-03-12T00:45:26.612926779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:26.613236 containerd[1719]: time="2026-03-12T00:45:26.612937979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.613236 containerd[1719]: time="2026-03-12T00:45:26.613013019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.617552 systemd[1]: Removed slice kubepods-besteffort-pod4baa34da_a95a_4849_9b92_ba607bae2503.slice - libcontainer container kubepods-besteffort-pod4baa34da_a95a_4849_9b92_ba607bae2503.slice. Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.098 [ERROR][4481] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.124 [INFO][4481] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0 calico-apiserver-784f7866bd- calico-system 0562416e-6f3d-4639-bbce-d4ae1ac939e1 938 0 2026-03-12 00:45:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784f7866bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 calico-apiserver-784f7866bd-ccwm9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibdc0c100907 [] [] }} ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.125 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 
00:45:26.227 [INFO][4530] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" HandleID="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.250 [INFO][4530] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" HandleID="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003fa190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"calico-apiserver-784f7866bd-ccwm9", "timestamp":"2026-03-12 00:45:26.227938653 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400018cdc0)} Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.250 [INFO][4530] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.489 [INFO][4530] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.489 [INFO][4530] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.493 [INFO][4530] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.508 [INFO][4530] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.524 [INFO][4530] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.526 [INFO][4530] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.533 [INFO][4530] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.534 [INFO][4530] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.547 [INFO][4530] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3 Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.556 [INFO][4530] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.569 [INFO][4530] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.196/26] block=192.168.20.192/26 handle="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.569 [INFO][4530] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.196/26] handle="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.569 [INFO][4530] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:26.623270 containerd[1719]: 2026-03-12 00:45:26.569 [INFO][4530] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.196/26] IPv6=[] ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" HandleID="k8s-pod-network.6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:26.623800 containerd[1719]: 2026-03-12 00:45:26.578 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"0562416e-6f3d-4639-bbce-d4ae1ac939e1", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"calico-apiserver-784f7866bd-ccwm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdc0c100907", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.623800 containerd[1719]: 2026-03-12 00:45:26.578 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.196/32] ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:26.623800 containerd[1719]: 2026-03-12 00:45:26.578 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdc0c100907 ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:26.623800 containerd[1719]: 2026-03-12 00:45:26.583 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" 
WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:26.623800 containerd[1719]: 2026-03-12 00:45:26.586 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"0562416e-6f3d-4639-bbce-d4ae1ac939e1", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3", Pod:"calico-apiserver-784f7866bd-ccwm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdc0c100907", MAC:"f2:72:8f:a4:0f:7c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.623800 containerd[1719]: 2026-03-12 00:45:26.606 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3" Namespace="calico-system" Pod="calico-apiserver-784f7866bd-ccwm9" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:26.682712 containerd[1719]: time="2026-03-12T00:45:26.682625951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:26.683158 containerd[1719]: time="2026-03-12T00:45:26.683101151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:26.683632 containerd[1719]: time="2026-03-12T00:45:26.683308551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.683963 containerd[1719]: time="2026-03-12T00:45:26.683851512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.688329 systemd[1]: Started cri-containerd-46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d.scope - libcontainer container 46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d. Mar 12 00:45:26.719563 systemd[1]: Started cri-containerd-6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3.scope - libcontainer container 6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3. 
Mar 12 00:45:26.732486 systemd-networkd[1360]: calib5bf77b6444: Link UP Mar 12 00:45:26.733610 systemd-networkd[1360]: calib5bf77b6444: Gained carrier Mar 12 00:45:26.763328 containerd[1719]: time="2026-03-12T00:45:26.763216931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-dcrfg,Uid:f4641c63-b65a-46ce-b0bc-583db4549b9d,Namespace:calico-system,Attempt:1,} returns sandbox id \"ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c\"" Mar 12 00:45:26.767833 containerd[1719]: time="2026-03-12T00:45:26.767285854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.109 [ERROR][4496] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.140 [INFO][4496] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0 goldmane-5b85766d88- calico-system da3b3329-2fdd-41d7-bb40-059c94905ea3 936 0 2026-03-12 00:45:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 goldmane-5b85766d88-rpsht eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib5bf77b6444 [] [] }} ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.140 [INFO][4496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.242 [INFO][4539] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" HandleID="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.257 [INFO][4539] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" HandleID="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"goldmane-5b85766d88-rpsht", "timestamp":"2026-03-12 00:45:26.242970984 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.257 [INFO][4539] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.569 [INFO][4539] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.570 [INFO][4539] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.606 [INFO][4539] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.624 [INFO][4539] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.631 [INFO][4539] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.637 [INFO][4539] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.655 [INFO][4539] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.656 [INFO][4539] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.665 [INFO][4539] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.680 [INFO][4539] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.711 [INFO][4539] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.197/26] block=192.168.20.192/26 handle="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.712 [INFO][4539] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.197/26] handle="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.712 [INFO][4539] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:26.770039 containerd[1719]: 2026-03-12 00:45:26.713 [INFO][4539] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.197/26] IPv6=[] ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" HandleID="k8s-pod-network.cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:26.771030 containerd[1719]: 2026-03-12 00:45:26.724 [INFO][4496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"da3b3329-2fdd-41d7-bb40-059c94905ea3", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"goldmane-5b85766d88-rpsht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib5bf77b6444", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.771030 containerd[1719]: 2026-03-12 00:45:26.724 [INFO][4496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.197/32] ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:26.771030 containerd[1719]: 2026-03-12 00:45:26.724 [INFO][4496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5bf77b6444 ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:26.771030 containerd[1719]: 2026-03-12 00:45:26.735 [INFO][4496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:26.771030 containerd[1719]: 2026-03-12 00:45:26.736 [INFO][4496] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"da3b3329-2fdd-41d7-bb40-059c94905ea3", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc", Pod:"goldmane-5b85766d88-rpsht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib5bf77b6444", MAC:"7e:15:5f:c9:ac:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:26.771030 containerd[1719]: 2026-03-12 00:45:26.767 [INFO][4496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc" Namespace="calico-system" Pod="goldmane-5b85766d88-rpsht" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:26.821136 systemd[1]: Created slice kubepods-besteffort-pod471da856_96fd_4ee1_9eab_59a93060e33d.slice - libcontainer container kubepods-besteffort-pod471da856_96fd_4ee1_9eab_59a93060e33d.slice. Mar 12 00:45:26.851693 containerd[1719]: time="2026-03-12T00:45:26.847004673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:26.851693 containerd[1719]: time="2026-03-12T00:45:26.847051313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:26.851693 containerd[1719]: time="2026-03-12T00:45:26.847075833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.851693 containerd[1719]: time="2026-03-12T00:45:26.847197953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:26.866358 systemd[1]: Started cri-containerd-cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc.scope - libcontainer container cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc. 
Mar 12 00:45:26.896482 containerd[1719]: time="2026-03-12T00:45:26.896438030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4957d78b-vxbpg,Uid:2b918b6d-d1dc-41b7-9960-5c26c50358cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62\"" Mar 12 00:45:26.902499 kubelet[3156]: I0312 00:45:26.902472 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/471da856-96fd-4ee1-9eab-59a93060e33d-whisker-ca-bundle\") pod \"whisker-747bbc7567-c8fnh\" (UID: \"471da856-96fd-4ee1-9eab-59a93060e33d\") " pod="calico-system/whisker-747bbc7567-c8fnh" Mar 12 00:45:26.903389 kubelet[3156]: I0312 00:45:26.903103 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/471da856-96fd-4ee1-9eab-59a93060e33d-nginx-config\") pod \"whisker-747bbc7567-c8fnh\" (UID: \"471da856-96fd-4ee1-9eab-59a93060e33d\") " pod="calico-system/whisker-747bbc7567-c8fnh" Mar 12 00:45:26.903389 kubelet[3156]: I0312 00:45:26.903133 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tcn8\" (UniqueName: \"kubernetes.io/projected/471da856-96fd-4ee1-9eab-59a93060e33d-kube-api-access-9tcn8\") pod \"whisker-747bbc7567-c8fnh\" (UID: \"471da856-96fd-4ee1-9eab-59a93060e33d\") " pod="calico-system/whisker-747bbc7567-c8fnh" Mar 12 00:45:26.903389 kubelet[3156]: I0312 00:45:26.903213 3156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/471da856-96fd-4ee1-9eab-59a93060e33d-whisker-backend-key-pair\") pod \"whisker-747bbc7567-c8fnh\" (UID: \"471da856-96fd-4ee1-9eab-59a93060e33d\") " pod="calico-system/whisker-747bbc7567-c8fnh" Mar 12 
00:45:26.904653 containerd[1719]: time="2026-03-12T00:45:26.904235355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m9kwb,Uid:bb4b0ced-686e-4023-85d0-51713ff7caee,Namespace:kube-system,Attempt:1,} returns sandbox id \"46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d\"" Mar 12 00:45:26.917066 containerd[1719]: time="2026-03-12T00:45:26.916888605Z" level=info msg="CreateContainer within sandbox \"46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 00:45:26.951454 containerd[1719]: time="2026-03-12T00:45:26.951258950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784f7866bd-ccwm9,Uid:0562416e-6f3d-4639-bbce-d4ae1ac939e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3\"" Mar 12 00:45:26.962125 containerd[1719]: time="2026-03-12T00:45:26.962006838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rpsht,Uid:da3b3329-2fdd-41d7-bb40-059c94905ea3,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc\"" Mar 12 00:45:26.979789 containerd[1719]: time="2026-03-12T00:45:26.979751211Z" level=info msg="CreateContainer within sandbox \"46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78d9cda627ab5c3be625c91c423399436473ddd4775b97e0fdc17a308a878a1b\"" Mar 12 00:45:26.981342 containerd[1719]: time="2026-03-12T00:45:26.980530492Z" level=info msg="StartContainer for \"78d9cda627ab5c3be625c91c423399436473ddd4775b97e0fdc17a308a878a1b\"" Mar 12 00:45:27.032541 systemd[1]: Started cri-containerd-78d9cda627ab5c3be625c91c423399436473ddd4775b97e0fdc17a308a878a1b.scope - libcontainer container 78d9cda627ab5c3be625c91c423399436473ddd4775b97e0fdc17a308a878a1b. 
Mar 12 00:45:27.072697 containerd[1719]: time="2026-03-12T00:45:27.072498240Z" level=info msg="StartContainer for \"78d9cda627ab5c3be625c91c423399436473ddd4775b97e0fdc17a308a878a1b\" returns successfully" Mar 12 00:45:27.125193 containerd[1719]: time="2026-03-12T00:45:27.124366759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-747bbc7567-c8fnh,Uid:471da856-96fd-4ee1-9eab-59a93060e33d,Namespace:calico-system,Attempt:0,}" Mar 12 00:45:27.330650 systemd[1]: run-containerd-runc-k8s.io-ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c-runc.zbPZqe.mount: Deactivated successfully. Mar 12 00:45:27.334398 kernel: calico-node[4576]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 00:45:27.352065 systemd-networkd[1360]: calie3e5d9c8029: Link UP Mar 12 00:45:27.353420 systemd-networkd[1360]: calie3e5d9c8029: Gained carrier Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.251 [INFO][4970] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0 whisker-747bbc7567- calico-system 471da856-96fd-4ee1-9eab-59a93060e33d 973 0 2026-03-12 00:45:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:747bbc7567 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 whisker-747bbc7567-c8fnh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie3e5d9c8029 [] [] }} ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.251 [INFO][4970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.289 [INFO][4988] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" HandleID="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.298 [INFO][4988] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" HandleID="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb570), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"whisker-747bbc7567-c8fnh", "timestamp":"2026-03-12 00:45:27.289304121 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400048af20)} Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.298 [INFO][4988] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.298 [INFO][4988] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.298 [INFO][4988] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.301 [INFO][4988] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.306 [INFO][4988] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.312 [INFO][4988] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.313 [INFO][4988] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.315 [INFO][4988] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.315 [INFO][4988] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.317 [INFO][4988] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429 Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.327 [INFO][4988] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.341 [INFO][4988] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.198/26] block=192.168.20.192/26 handle="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.341 [INFO][4988] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.198/26] handle="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.341 [INFO][4988] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:27.391560 containerd[1719]: 2026-03-12 00:45:27.341 [INFO][4988] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.198/26] IPv6=[] ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" HandleID="k8s-pod-network.ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" Mar 12 00:45:27.392069 containerd[1719]: 2026-03-12 00:45:27.344 [INFO][4970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0", GenerateName:"whisker-747bbc7567-", Namespace:"calico-system", SelfLink:"", UID:"471da856-96fd-4ee1-9eab-59a93060e33d", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"747bbc7567", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"whisker-747bbc7567-c8fnh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie3e5d9c8029", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:27.392069 containerd[1719]: 2026-03-12 00:45:27.344 [INFO][4970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.198/32] ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" Mar 12 00:45:27.392069 containerd[1719]: 2026-03-12 00:45:27.344 [INFO][4970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3e5d9c8029 ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" Mar 12 00:45:27.392069 containerd[1719]: 2026-03-12 00:45:27.354 [INFO][4970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" Mar 12 00:45:27.392069 containerd[1719]: 2026-03-12 00:45:27.354 [INFO][4970] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0", GenerateName:"whisker-747bbc7567-", Namespace:"calico-system", SelfLink:"", UID:"471da856-96fd-4ee1-9eab-59a93060e33d", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"747bbc7567", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429", Pod:"whisker-747bbc7567-c8fnh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie3e5d9c8029", MAC:"92:87:f7:58:a7:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:27.392069 containerd[1719]: 2026-03-12 00:45:27.388 [INFO][4970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429" Namespace="calico-system" Pod="whisker-747bbc7567-c8fnh" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--747bbc7567--c8fnh-eth0" Mar 12 00:45:27.427251 containerd[1719]: time="2026-03-12T00:45:27.426979904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:27.427251 containerd[1719]: time="2026-03-12T00:45:27.427068864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:27.427251 containerd[1719]: time="2026-03-12T00:45:27.427082544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:27.427251 containerd[1719]: time="2026-03-12T00:45:27.427164544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:27.455323 systemd[1]: run-containerd-runc-k8s.io-ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429-runc.p4OPD0.mount: Deactivated successfully. Mar 12 00:45:27.463533 kubelet[3156]: I0312 00:45:27.463112 3156 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4baa34da-a95a-4849-9b92-ba607bae2503" path="/var/lib/kubelet/pods/4baa34da-a95a-4849-9b92-ba607bae2503/volumes" Mar 12 00:45:27.463571 systemd[1]: Started cri-containerd-ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429.scope - libcontainer container ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429. 
Mar 12 00:45:27.499470 containerd[1719]: time="2026-03-12T00:45:27.499340997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-747bbc7567-c8fnh,Uid:471da856-96fd-4ee1-9eab-59a93060e33d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429\"" Mar 12 00:45:27.773775 systemd-networkd[1360]: calid93add5251f: Gained IPv6LL Mar 12 00:45:27.848911 systemd-networkd[1360]: vxlan.calico: Link UP Mar 12 00:45:27.848922 systemd-networkd[1360]: vxlan.calico: Gained carrier Mar 12 00:45:28.222453 systemd-networkd[1360]: cali43f75afcfeb: Gained IPv6LL Mar 12 00:45:28.222705 systemd-networkd[1360]: calibdc0c100907: Gained IPv6LL Mar 12 00:45:28.286593 systemd-networkd[1360]: cali256c3133b67: Gained IPv6LL Mar 12 00:45:28.605528 systemd-networkd[1360]: calib5bf77b6444: Gained IPv6LL Mar 12 00:45:28.633430 kubelet[3156]: I0312 00:45:28.633325 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m9kwb" podStartSLOduration=43.633305159 podStartE2EDuration="43.633305159s" podCreationTimestamp="2026-03-12 00:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 00:45:27.631233775 +0000 UTC m=+48.320321964" watchObservedRunningTime="2026-03-12 00:45:28.633305159 +0000 UTC m=+49.322393348" Mar 12 00:45:28.925610 systemd-networkd[1360]: vxlan.calico: Gained IPv6LL Mar 12 00:45:28.989493 systemd-networkd[1360]: calie3e5d9c8029: Gained IPv6LL Mar 12 00:45:29.624302 containerd[1719]: time="2026-03-12T00:45:29.624250135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:29.626729 containerd[1719]: time="2026-03-12T00:45:29.626699617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" 
Mar 12 00:45:29.629576 containerd[1719]: time="2026-03-12T00:45:29.629467859Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:29.642250 containerd[1719]: time="2026-03-12T00:45:29.641522028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:29.642250 containerd[1719]: time="2026-03-12T00:45:29.642108389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.874788975s" Mar 12 00:45:29.642250 containerd[1719]: time="2026-03-12T00:45:29.642136309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 12 00:45:29.643340 containerd[1719]: time="2026-03-12T00:45:29.643310149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 00:45:29.657309 containerd[1719]: time="2026-03-12T00:45:29.657270760Z" level=info msg="CreateContainer within sandbox \"ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 00:45:29.683000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162281905.mount: Deactivated successfully. 
Mar 12 00:45:30.329772 containerd[1719]: time="2026-03-12T00:45:30.329646659Z" level=info msg="CreateContainer within sandbox \"ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"928ca2702c4ca885ea94516e511478330e88a8c27ccff9f79f7d5d9cccfc92fb\"" Mar 12 00:45:30.331572 containerd[1719]: time="2026-03-12T00:45:30.331534061Z" level=info msg="StartContainer for \"928ca2702c4ca885ea94516e511478330e88a8c27ccff9f79f7d5d9cccfc92fb\"" Mar 12 00:45:30.368669 systemd[1]: Started cri-containerd-928ca2702c4ca885ea94516e511478330e88a8c27ccff9f79f7d5d9cccfc92fb.scope - libcontainer container 928ca2702c4ca885ea94516e511478330e88a8c27ccff9f79f7d5d9cccfc92fb. Mar 12 00:45:30.405465 containerd[1719]: time="2026-03-12T00:45:30.405361035Z" level=info msg="StartContainer for \"928ca2702c4ca885ea94516e511478330e88a8c27ccff9f79f7d5d9cccfc92fb\" returns successfully" Mar 12 00:45:30.633664 kubelet[3156]: I0312 00:45:30.633070 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-784f7866bd-dcrfg" podStartSLOduration=25.755627828 podStartE2EDuration="28.633053965s" podCreationTimestamp="2026-03-12 00:45:02 +0000 UTC" firstStartedPulling="2026-03-12 00:45:26.765743332 +0000 UTC m=+47.454831521" lastFinishedPulling="2026-03-12 00:45:29.643169469 +0000 UTC m=+50.332257658" observedRunningTime="2026-03-12 00:45:30.630552323 +0000 UTC m=+51.319640592" watchObservedRunningTime="2026-03-12 00:45:30.633053965 +0000 UTC m=+51.322142154" Mar 12 00:45:31.618433 kubelet[3156]: I0312 00:45:31.618069 3156 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 00:45:33.749503 containerd[1719]: time="2026-03-12T00:45:33.749450125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:33.752304 containerd[1719]: 
time="2026-03-12T00:45:33.752146767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 12 00:45:33.757270 containerd[1719]: time="2026-03-12T00:45:33.757219131Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:33.762595 containerd[1719]: time="2026-03-12T00:45:33.762559895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:33.763378 containerd[1719]: time="2026-03-12T00:45:33.763344416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.119997065s" Mar 12 00:45:33.763426 containerd[1719]: time="2026-03-12T00:45:33.763401976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 12 00:45:33.764449 containerd[1719]: time="2026-03-12T00:45:33.764422336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 00:45:33.785729 containerd[1719]: time="2026-03-12T00:45:33.785438352Z" level=info msg="CreateContainer within sandbox \"8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 00:45:33.832940 containerd[1719]: time="2026-03-12T00:45:33.832324347Z" level=info msg="CreateContainer within sandbox 
\"8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"176d7e1317cacdbc536bc9782765fd2342d788fbc27bf8eb13edf8b04d98cbf7\"" Mar 12 00:45:33.833348 containerd[1719]: time="2026-03-12T00:45:33.833327348Z" level=info msg="StartContainer for \"176d7e1317cacdbc536bc9782765fd2342d788fbc27bf8eb13edf8b04d98cbf7\"" Mar 12 00:45:33.863590 systemd[1]: Started cri-containerd-176d7e1317cacdbc536bc9782765fd2342d788fbc27bf8eb13edf8b04d98cbf7.scope - libcontainer container 176d7e1317cacdbc536bc9782765fd2342d788fbc27bf8eb13edf8b04d98cbf7. Mar 12 00:45:33.903721 containerd[1719]: time="2026-03-12T00:45:33.903674200Z" level=info msg="StartContainer for \"176d7e1317cacdbc536bc9782765fd2342d788fbc27bf8eb13edf8b04d98cbf7\" returns successfully" Mar 12 00:45:34.088618 containerd[1719]: time="2026-03-12T00:45:34.088435859Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:34.092085 containerd[1719]: time="2026-03-12T00:45:34.091855421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 12 00:45:34.094093 containerd[1719]: time="2026-03-12T00:45:34.094019583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 329.425567ms" Mar 12 00:45:34.094093 containerd[1719]: time="2026-03-12T00:45:34.094052303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 12 00:45:34.095802 containerd[1719]: 
time="2026-03-12T00:45:34.095654624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 00:45:34.101771 containerd[1719]: time="2026-03-12T00:45:34.101745029Z" level=info msg="CreateContainer within sandbox \"6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 00:45:34.133278 containerd[1719]: time="2026-03-12T00:45:34.133120932Z" level=info msg="CreateContainer within sandbox \"6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8beea66bb09e1e7b37383ff4aea51f2dbda3629b56512367db474052e31b9716\"" Mar 12 00:45:34.134013 containerd[1719]: time="2026-03-12T00:45:34.133984333Z" level=info msg="StartContainer for \"8beea66bb09e1e7b37383ff4aea51f2dbda3629b56512367db474052e31b9716\"" Mar 12 00:45:34.158555 systemd[1]: Started cri-containerd-8beea66bb09e1e7b37383ff4aea51f2dbda3629b56512367db474052e31b9716.scope - libcontainer container 8beea66bb09e1e7b37383ff4aea51f2dbda3629b56512367db474052e31b9716. 
Mar 12 00:45:34.191648 containerd[1719]: time="2026-03-12T00:45:34.191484696Z" level=info msg="StartContainer for \"8beea66bb09e1e7b37383ff4aea51f2dbda3629b56512367db474052e31b9716\" returns successfully" Mar 12 00:45:34.645973 kubelet[3156]: I0312 00:45:34.645190 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f4957d78b-vxbpg" podStartSLOduration=25.782774173 podStartE2EDuration="32.645176515s" podCreationTimestamp="2026-03-12 00:45:02 +0000 UTC" firstStartedPulling="2026-03-12 00:45:26.901876434 +0000 UTC m=+47.590964623" lastFinishedPulling="2026-03-12 00:45:33.764278776 +0000 UTC m=+54.453366965" observedRunningTime="2026-03-12 00:45:34.644464115 +0000 UTC m=+55.333552304" watchObservedRunningTime="2026-03-12 00:45:34.645176515 +0000 UTC m=+55.334264704" Mar 12 00:45:34.698952 kubelet[3156]: I0312 00:45:34.697798 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-784f7866bd-ccwm9" podStartSLOduration=25.555983123 podStartE2EDuration="32.697781554s" podCreationTimestamp="2026-03-12 00:45:02 +0000 UTC" firstStartedPulling="2026-03-12 00:45:26.953092032 +0000 UTC m=+47.642180221" lastFinishedPulling="2026-03-12 00:45:34.094890463 +0000 UTC m=+54.783978652" observedRunningTime="2026-03-12 00:45:34.669393693 +0000 UTC m=+55.358481882" watchObservedRunningTime="2026-03-12 00:45:34.697781554 +0000 UTC m=+55.386869743" Mar 12 00:45:35.634857 kubelet[3156]: I0312 00:45:35.634822 3156 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 00:45:36.236595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035420958.mount: Deactivated successfully. 
Mar 12 00:45:36.582476 containerd[1719]: time="2026-03-12T00:45:36.582338884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:36.587721 containerd[1719]: time="2026-03-12T00:45:36.587682488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 12 00:45:36.590688 containerd[1719]: time="2026-03-12T00:45:36.590445290Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:36.594250 containerd[1719]: time="2026-03-12T00:45:36.594176453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:36.595150 containerd[1719]: time="2026-03-12T00:45:36.595032333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.499347469s" Mar 12 00:45:36.595150 containerd[1719]: time="2026-03-12T00:45:36.595065053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 12 00:45:36.597938 containerd[1719]: time="2026-03-12T00:45:36.597789095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 00:45:36.602623 containerd[1719]: time="2026-03-12T00:45:36.602594339Z" level=info msg="CreateContainer within sandbox 
\"cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 00:45:36.634604 containerd[1719]: time="2026-03-12T00:45:36.634520043Z" level=info msg="CreateContainer within sandbox \"cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"27a475cc2d6a0483f4457e91e7e3fdc142f5ca053f27775842704fd51478fdab\"" Mar 12 00:45:36.636499 containerd[1719]: time="2026-03-12T00:45:36.635175323Z" level=info msg="StartContainer for \"27a475cc2d6a0483f4457e91e7e3fdc142f5ca053f27775842704fd51478fdab\"" Mar 12 00:45:36.694603 systemd[1]: Started cri-containerd-27a475cc2d6a0483f4457e91e7e3fdc142f5ca053f27775842704fd51478fdab.scope - libcontainer container 27a475cc2d6a0483f4457e91e7e3fdc142f5ca053f27775842704fd51478fdab. Mar 12 00:45:36.728228 containerd[1719]: time="2026-03-12T00:45:36.728111193Z" level=info msg="StartContainer for \"27a475cc2d6a0483f4457e91e7e3fdc142f5ca053f27775842704fd51478fdab\" returns successfully" Mar 12 00:45:38.081188 containerd[1719]: time="2026-03-12T00:45:38.081138485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:38.083709 containerd[1719]: time="2026-03-12T00:45:38.083657527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 12 00:45:38.087013 containerd[1719]: time="2026-03-12T00:45:38.086651889Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:38.090536 containerd[1719]: time="2026-03-12T00:45:38.090500812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:38.091452 containerd[1719]: time="2026-03-12T00:45:38.091420493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.493571277s" Mar 12 00:45:38.091517 containerd[1719]: time="2026-03-12T00:45:38.091454733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 12 00:45:38.098723 containerd[1719]: time="2026-03-12T00:45:38.098576458Z" level=info msg="CreateContainer within sandbox \"ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 00:45:38.131859 containerd[1719]: time="2026-03-12T00:45:38.131818243Z" level=info msg="CreateContainer within sandbox \"ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f55f185879b61196b2241d597a6f68f16df07e26ba6c758d1b7d488f60bc67fd\"" Mar 12 00:45:38.133540 containerd[1719]: time="2026-03-12T00:45:38.132964044Z" level=info msg="StartContainer for \"f55f185879b61196b2241d597a6f68f16df07e26ba6c758d1b7d488f60bc67fd\"" Mar 12 00:45:38.164575 systemd[1]: Started cri-containerd-f55f185879b61196b2241d597a6f68f16df07e26ba6c758d1b7d488f60bc67fd.scope - libcontainer container f55f185879b61196b2241d597a6f68f16df07e26ba6c758d1b7d488f60bc67fd. 
Mar 12 00:45:38.200655 containerd[1719]: time="2026-03-12T00:45:38.200611854Z" level=info msg="StartContainer for \"f55f185879b61196b2241d597a6f68f16df07e26ba6c758d1b7d488f60bc67fd\" returns successfully" Mar 12 00:45:38.203186 containerd[1719]: time="2026-03-12T00:45:38.203146176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 00:45:38.405298 containerd[1719]: time="2026-03-12T00:45:38.404856687Z" level=info msg="StopPodSandbox for \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\"" Mar 12 00:45:38.453654 kubelet[3156]: I0312 00:45:38.453100 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-rpsht" podStartSLOduration=28.822097389 podStartE2EDuration="38.453079643s" podCreationTimestamp="2026-03-12 00:45:00 +0000 UTC" firstStartedPulling="2026-03-12 00:45:26.96509776 +0000 UTC m=+47.654185949" lastFinishedPulling="2026-03-12 00:45:36.596080054 +0000 UTC m=+57.285168203" observedRunningTime="2026-03-12 00:45:37.683218067 +0000 UTC m=+58.372306256" watchObservedRunningTime="2026-03-12 00:45:38.453079643 +0000 UTC m=+59.142167792" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.451 [INFO][5459] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.451 [INFO][5459] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" iface="eth0" netns="/var/run/netns/cni-e0e92f0f-1c78-c01f-e4f2-e95b334c38d9" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.453 [INFO][5459] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" iface="eth0" netns="/var/run/netns/cni-e0e92f0f-1c78-c01f-e4f2-e95b334c38d9" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.454 [INFO][5459] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" iface="eth0" netns="/var/run/netns/cni-e0e92f0f-1c78-c01f-e4f2-e95b334c38d9" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.454 [INFO][5459] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.454 [INFO][5459] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.474 [INFO][5466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.474 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.474 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.482 [WARNING][5466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.482 [INFO][5466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.484 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:38.488906 containerd[1719]: 2026-03-12 00:45:38.486 [INFO][5459] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:38.488906 containerd[1719]: time="2026-03-12T00:45:38.488653750Z" level=info msg="TearDown network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\" successfully" Mar 12 00:45:38.488906 containerd[1719]: time="2026-03-12T00:45:38.488692510Z" level=info msg="StopPodSandbox for \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\" returns successfully" Mar 12 00:45:38.490250 containerd[1719]: time="2026-03-12T00:45:38.490211231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6th7z,Uid:41eb878a-d7e0-41f9-bfd8-96489d627e74,Namespace:kube-system,Attempt:1,}" Mar 12 00:45:38.492470 systemd[1]: run-netns-cni\x2de0e92f0f\x2d1c78\x2dc01f\x2de4f2\x2de95b334c38d9.mount: Deactivated successfully. 
Mar 12 00:45:38.640795 systemd-networkd[1360]: cali7faa7b6cc39: Link UP Mar 12 00:45:38.641809 systemd-networkd[1360]: cali7faa7b6cc39: Gained carrier Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.564 [INFO][5472] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0 coredns-674b8bbfcf- kube-system 41eb878a-d7e0-41f9-bfd8-96489d627e74 1052 0 2026-03-12 00:44:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 coredns-674b8bbfcf-6th7z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7faa7b6cc39 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.564 [INFO][5472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.585 [INFO][5484] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" HandleID="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.594 [INFO][5484] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" HandleID="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"coredns-674b8bbfcf-6th7z", "timestamp":"2026-03-12 00:45:38.585678102 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a9080)} Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.594 [INFO][5484] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.594 [INFO][5484] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.594 [INFO][5484] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.597 [INFO][5484] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.603 [INFO][5484] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.606 [INFO][5484] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.608 [INFO][5484] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.610 [INFO][5484] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.611 [INFO][5484] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.612 [INFO][5484] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68 Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.620 [INFO][5484] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.631 [INFO][5484] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.199/26] block=192.168.20.192/26 handle="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.631 [INFO][5484] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.199/26] handle="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.631 [INFO][5484] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:38.673948 containerd[1719]: 2026-03-12 00:45:38.631 [INFO][5484] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.199/26] IPv6=[] ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" HandleID="k8s-pod-network.ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.675618 containerd[1719]: 2026-03-12 00:45:38.635 [INFO][5472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"41eb878a-d7e0-41f9-bfd8-96489d627e74", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"coredns-674b8bbfcf-6th7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7faa7b6cc39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:38.675618 containerd[1719]: 2026-03-12 00:45:38.635 [INFO][5472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.199/32] ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.675618 containerd[1719]: 2026-03-12 00:45:38.636 [INFO][5472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7faa7b6cc39 ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.675618 containerd[1719]: 2026-03-12 00:45:38.641 [INFO][5472] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.675618 containerd[1719]: 2026-03-12 00:45:38.642 [INFO][5472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"41eb878a-d7e0-41f9-bfd8-96489d627e74", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68", Pod:"coredns-674b8bbfcf-6th7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7faa7b6cc39", MAC:"fe:fd:fc:60:20:9f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:38.675618 containerd[1719]: 2026-03-12 00:45:38.670 [INFO][5472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68" Namespace="kube-system" Pod="coredns-674b8bbfcf-6th7z" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:38.725240 containerd[1719]: time="2026-03-12T00:45:38.724767286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:38.725240 containerd[1719]: time="2026-03-12T00:45:38.724845966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:38.725996 containerd[1719]: time="2026-03-12T00:45:38.725814327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:38.725996 containerd[1719]: time="2026-03-12T00:45:38.725953247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:38.756946 systemd[1]: run-containerd-runc-k8s.io-ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68-runc.1bnnQ2.mount: Deactivated successfully. 
Mar 12 00:45:38.766537 systemd[1]: Started cri-containerd-ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68.scope - libcontainer container ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68. Mar 12 00:45:39.132457 containerd[1719]: time="2026-03-12T00:45:39.132417991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6th7z,Uid:41eb878a-d7e0-41f9-bfd8-96489d627e74,Namespace:kube-system,Attempt:1,} returns sandbox id \"ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68\"" Mar 12 00:45:39.402667 containerd[1719]: time="2026-03-12T00:45:39.402548833Z" level=info msg="StopPodSandbox for \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\"" Mar 12 00:45:39.413768 containerd[1719]: time="2026-03-12T00:45:39.413650162Z" level=info msg="StopPodSandbox for \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\"" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.489 [WARNING][5601] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"f4641c63-b65a-46ce-b0bc-583db4549b9d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c", Pod:"calico-apiserver-784f7866bd-dcrfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f75afcfeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.583 [INFO][5601] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.583 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" iface="eth0" netns="" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.583 [INFO][5601] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.583 [INFO][5601] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.611 [INFO][5614] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.611 [INFO][5614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.611 [INFO][5614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.620 [WARNING][5614] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.620 [INFO][5614] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.621 [INFO][5614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:39.625536 containerd[1719]: 2026-03-12 00:45:39.624 [INFO][5601] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.626007 containerd[1719]: time="2026-03-12T00:45:39.625660080Z" level=info msg="TearDown network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\" successfully" Mar 12 00:45:39.626007 containerd[1719]: time="2026-03-12T00:45:39.625686120Z" level=info msg="StopPodSandbox for \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\" returns successfully" Mar 12 00:45:39.626819 containerd[1719]: time="2026-03-12T00:45:39.626705161Z" level=info msg="RemovePodSandbox for \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\"" Mar 12 00:45:39.628142 containerd[1719]: time="2026-03-12T00:45:39.628080202Z" level=info msg="Forcibly stopping sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\"" Mar 12 00:45:39.631428 containerd[1719]: time="2026-03-12T00:45:39.631280764Z" level=info msg="CreateContainer within sandbox \"ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.485 [INFO][5586] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.485 [INFO][5586] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" iface="eth0" netns="/var/run/netns/cni-82487d3e-a94f-30f4-39e4-3511d6c97b18" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.485 [INFO][5586] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" iface="eth0" netns="/var/run/netns/cni-82487d3e-a94f-30f4-39e4-3511d6c97b18" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.488 [INFO][5586] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" iface="eth0" netns="/var/run/netns/cni-82487d3e-a94f-30f4-39e4-3511d6c97b18" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.583 [INFO][5586] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.583 [INFO][5586] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.614 [INFO][5615] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.615 [INFO][5615] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.621 [INFO][5615] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.633 [WARNING][5615] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.633 [INFO][5615] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.635 [INFO][5615] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:39.640554 containerd[1719]: 2026-03-12 00:45:39.638 [INFO][5586] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:45:39.643728 containerd[1719]: time="2026-03-12T00:45:39.643567974Z" level=info msg="TearDown network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\" successfully" Mar 12 00:45:39.643728 containerd[1719]: time="2026-03-12T00:45:39.643602654Z" level=info msg="StopPodSandbox for \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\" returns successfully" Mar 12 00:45:39.644959 containerd[1719]: time="2026-03-12T00:45:39.644734894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b7676,Uid:fe7afae4-b713-43a4-990f-e7b7f89a4386,Namespace:calico-system,Attempt:1,}" Mar 12 00:45:39.672636 systemd[1]: run-netns-cni\x2d82487d3e\x2da94f\x2d30f4\x2d39e4\x2d3511d6c97b18.mount: Deactivated successfully. Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.675 [WARNING][5638] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"f4641c63-b65a-46ce-b0bc-583db4549b9d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"ddad0a57d212ecd83ad197a6bb7af7a25e5dc209f8637e8cb5846320c8f4a91c", Pod:"calico-apiserver-784f7866bd-dcrfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f75afcfeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.675 [INFO][5638] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.675 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" iface="eth0" netns="" Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.675 [INFO][5638] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.675 [INFO][5638] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.694 [INFO][5645] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.694 [INFO][5645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.694 [INFO][5645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.702 [WARNING][5645] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.702 [INFO][5645] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" HandleID="k8s-pod-network.fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--dcrfg-eth0" Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.703 [INFO][5645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:39.706751 containerd[1719]: 2026-03-12 00:45:39.705 [INFO][5638] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4" Mar 12 00:45:39.707984 containerd[1719]: time="2026-03-12T00:45:39.707256021Z" level=info msg="TearDown network for sandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\" successfully" Mar 12 00:45:40.012410 containerd[1719]: time="2026-03-12T00:45:40.011959329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 00:45:40.012410 containerd[1719]: time="2026-03-12T00:45:40.012036569Z" level=info msg="RemovePodSandbox \"fe1e2362574d96ce96e79ee89996a0e42b1c3a6645ff4372d3693a22ad0eb8c4\" returns successfully" Mar 12 00:45:40.012824 containerd[1719]: time="2026-03-12T00:45:40.012798050Z" level=info msg="StopPodSandbox for \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\"" Mar 12 00:45:40.076552 containerd[1719]: time="2026-03-12T00:45:40.076102897Z" level=info msg="CreateContainer within sandbox \"ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4dbf5513202f195139bf9eabf3e126707357a7a3238e5e15ba132c47ece3702\"" Mar 12 00:45:40.094101 containerd[1719]: time="2026-03-12T00:45:40.092621349Z" level=info msg="StartContainer for \"e4dbf5513202f195139bf9eabf3e126707357a7a3238e5e15ba132c47ece3702\"" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.062 [WARNING][5660] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.062 [INFO][5660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.062 [INFO][5660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" iface="eth0" netns="" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.062 [INFO][5660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.062 [INFO][5660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.096 [INFO][5667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.097 [INFO][5667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.097 [INFO][5667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.111 [WARNING][5667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.111 [INFO][5667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.114 [INFO][5667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.127006 containerd[1719]: 2026-03-12 00:45:40.121 [INFO][5660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.129235 containerd[1719]: time="2026-03-12T00:45:40.129206377Z" level=info msg="TearDown network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\" successfully" Mar 12 00:45:40.129651 containerd[1719]: time="2026-03-12T00:45:40.129625057Z" level=info msg="StopPodSandbox for \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\" returns successfully" Mar 12 00:45:40.130529 containerd[1719]: time="2026-03-12T00:45:40.130507658Z" level=info msg="RemovePodSandbox for \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\"" Mar 12 00:45:40.131013 containerd[1719]: time="2026-03-12T00:45:40.130990938Z" level=info msg="Forcibly stopping sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\"" Mar 12 00:45:40.134312 systemd[1]: Started cri-containerd-e4dbf5513202f195139bf9eabf3e126707357a7a3238e5e15ba132c47ece3702.scope - libcontainer container e4dbf5513202f195139bf9eabf3e126707357a7a3238e5e15ba132c47ece3702. 
Mar 12 00:45:40.182709 containerd[1719]: time="2026-03-12T00:45:40.182646497Z" level=info msg="StartContainer for \"e4dbf5513202f195139bf9eabf3e126707357a7a3238e5e15ba132c47ece3702\" returns successfully" Mar 12 00:45:40.292077 systemd-networkd[1360]: cali492c24933fb: Link UP Mar 12 00:45:40.293897 systemd-networkd[1360]: cali492c24933fb: Gained carrier Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.215 [WARNING][5720] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.215 [INFO][5720] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.215 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" iface="eth0" netns="" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.215 [INFO][5720] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.215 [INFO][5720] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.241 [INFO][5746] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.241 [INFO][5746] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.279 [INFO][5746] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.299 [WARNING][5746] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.299 [INFO][5746] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" HandleID="k8s-pod-network.e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Workload="ci--4081.3.6--n--d10d02cd33-k8s-whisker--f56947b46--d2zh8-eth0" Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.300 [INFO][5746] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.304294 containerd[1719]: 2026-03-12 00:45:40.302 [INFO][5720] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088" Mar 12 00:45:40.305336 containerd[1719]: time="2026-03-12T00:45:40.304734748Z" level=info msg="TearDown network for sandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\" successfully" Mar 12 00:45:40.316434 containerd[1719]: time="2026-03-12T00:45:40.316268597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 00:45:40.316434 containerd[1719]: time="2026-03-12T00:45:40.316358677Z" level=info msg="RemovePodSandbox \"e00028547037c24ef1a833e07b45b8db86911023268686b7dd435dbf08d67088\" returns successfully" Mar 12 00:45:40.318250 containerd[1719]: time="2026-03-12T00:45:40.317113317Z" level=info msg="StopPodSandbox for \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\"" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.166 [INFO][5673] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0 csi-node-driver- calico-system fe7afae4-b713-43a4-990f-e7b7f89a4386 1060 0 2026-03-12 00:45:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-d10d02cd33 csi-node-driver-b7676 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali492c24933fb [] [] }} ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.167 [INFO][5673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.214 [INFO][5733] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" 
HandleID="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.227 [INFO][5733] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" HandleID="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d10d02cd33", "pod":"csi-node-driver-b7676", "timestamp":"2026-03-12 00:45:40.214781601 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d10d02cd33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d9080)} Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.228 [INFO][5733] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.228 [INFO][5733] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.228 [INFO][5733] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d10d02cd33' Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.237 [INFO][5733] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.243 [INFO][5733] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.248 [INFO][5733] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.250 [INFO][5733] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.253 [INFO][5733] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.254 [INFO][5733] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.261 [INFO][5733] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549 Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.266 [INFO][5733] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.278 [INFO][5733] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.20.200/26] block=192.168.20.192/26 handle="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.278 [INFO][5733] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.200/26] handle="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" host="ci-4081.3.6-n-d10d02cd33" Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.278 [INFO][5733] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.318924 containerd[1719]: 2026-03-12 00:45:40.278 [INFO][5733] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.200/26] IPv6=[] ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" HandleID="k8s-pod-network.0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:40.319934 containerd[1719]: 2026-03-12 00:45:40.282 [INFO][5673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe7afae4-b713-43a4-990f-e7b7f89a4386", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"", Pod:"csi-node-driver-b7676", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali492c24933fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.319934 containerd[1719]: 2026-03-12 00:45:40.282 [INFO][5673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.200/32] ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:40.319934 containerd[1719]: 2026-03-12 00:45:40.282 [INFO][5673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali492c24933fb ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:40.319934 containerd[1719]: 2026-03-12 00:45:40.295 [INFO][5673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:40.319934 
containerd[1719]: 2026-03-12 00:45:40.297 [INFO][5673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe7afae4-b713-43a4-990f-e7b7f89a4386", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549", Pod:"csi-node-driver-b7676", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali492c24933fb", MAC:"b2:0b:0c:b4:d2:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.319934 containerd[1719]: 
2026-03-12 00:45:40.315 [INFO][5673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549" Namespace="calico-system" Pod="csi-node-driver-b7676" WorkloadEndpoint="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:45:40.359215 containerd[1719]: time="2026-03-12T00:45:40.358783628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 00:45:40.359215 containerd[1719]: time="2026-03-12T00:45:40.358848229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 00:45:40.359215 containerd[1719]: time="2026-03-12T00:45:40.358858949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:40.359215 containerd[1719]: time="2026-03-12T00:45:40.358938629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 00:45:40.384817 systemd[1]: Started cri-containerd-0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549.scope - libcontainer container 0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549. Mar 12 00:45:40.444702 containerd[1719]: time="2026-03-12T00:45:40.444662573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b7676,Uid:fe7afae4-b713-43a4-990f-e7b7f89a4386,Namespace:calico-system,Attempt:1,} returns sandbox id \"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549\"" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.390 [WARNING][5773] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0", GenerateName:"calico-kube-controllers-7f4957d78b-", Namespace:"calico-system", SelfLink:"", UID:"2b918b6d-d1dc-41b7-9960-5c26c50358cf", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4957d78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62", Pod:"calico-kube-controllers-7f4957d78b-vxbpg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid93add5251f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.392 [INFO][5773] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.392 [INFO][5773] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" iface="eth0" netns="" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.392 [INFO][5773] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.392 [INFO][5773] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.423 [INFO][5818] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.423 [INFO][5818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.423 [INFO][5818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.440 [WARNING][5818] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.440 [INFO][5818] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.442 [INFO][5818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.449637 containerd[1719]: 2026-03-12 00:45:40.447 [INFO][5773] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.450451 containerd[1719]: time="2026-03-12T00:45:40.450419977Z" level=info msg="TearDown network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\" successfully" Mar 12 00:45:40.450451 containerd[1719]: time="2026-03-12T00:45:40.450450217Z" level=info msg="StopPodSandbox for \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\" returns successfully" Mar 12 00:45:40.451057 containerd[1719]: time="2026-03-12T00:45:40.451030937Z" level=info msg="RemovePodSandbox for \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\"" Mar 12 00:45:40.451163 containerd[1719]: time="2026-03-12T00:45:40.451147298Z" level=info msg="Forcibly stopping sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\"" Mar 12 00:45:40.509602 systemd-networkd[1360]: cali7faa7b6cc39: Gained IPv6LL Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.491 [WARNING][5843] cni-plugin/k8s.go 616: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0", GenerateName:"calico-kube-controllers-7f4957d78b-", Namespace:"calico-system", SelfLink:"", UID:"2b918b6d-d1dc-41b7-9960-5c26c50358cf", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4957d78b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"8ac61cb5f0c35e18109cbc3e8793e2b931d2215ceb7a8cc05a05791c53987f62", Pod:"calico-kube-controllers-7f4957d78b-vxbpg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid93add5251f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.492 [INFO][5843] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.534543 
containerd[1719]: 2026-03-12 00:45:40.492 [INFO][5843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" iface="eth0" netns="" Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.492 [INFO][5843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.492 [INFO][5843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.512 [INFO][5851] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.512 [INFO][5851] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.512 [INFO][5851] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.528 [WARNING][5851] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.528 [INFO][5851] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" HandleID="k8s-pod-network.3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--kube--controllers--7f4957d78b--vxbpg-eth0" Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.530 [INFO][5851] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.534543 containerd[1719]: 2026-03-12 00:45:40.532 [INFO][5843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9" Mar 12 00:45:40.535029 containerd[1719]: time="2026-03-12T00:45:40.534584760Z" level=info msg="TearDown network for sandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\" successfully" Mar 12 00:45:40.542627 containerd[1719]: time="2026-03-12T00:45:40.542532246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 00:45:40.542627 containerd[1719]: time="2026-03-12T00:45:40.542607326Z" level=info msg="RemovePodSandbox \"3a7a3738bc6dcc1caaf3be6c55c44cca252d1f25b0bdcca6f0b18c5f1734e0c9\" returns successfully" Mar 12 00:45:40.544139 containerd[1719]: time="2026-03-12T00:45:40.543715127Z" level=info msg="StopPodSandbox for \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\"" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.580 [WARNING][5870] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bb4b0ced-686e-4023-85d0-51713ff7caee", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d", Pod:"coredns-674b8bbfcf-m9kwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256c3133b67", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.580 [INFO][5870] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.580 [INFO][5870] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" iface="eth0" netns="" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.580 [INFO][5870] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.580 [INFO][5870] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.601 [INFO][5877] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.601 [INFO][5877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.601 [INFO][5877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.610 [WARNING][5877] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.610 [INFO][5877] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.612 [INFO][5877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.615804 containerd[1719]: 2026-03-12 00:45:40.614 [INFO][5870] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.616814 containerd[1719]: time="2026-03-12T00:45:40.616278264Z" level=info msg="TearDown network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\" successfully" Mar 12 00:45:40.616814 containerd[1719]: time="2026-03-12T00:45:40.616308264Z" level=info msg="StopPodSandbox for \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\" returns successfully" Mar 12 00:45:40.617227 containerd[1719]: time="2026-03-12T00:45:40.616968424Z" level=info msg="RemovePodSandbox for \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\"" Mar 12 00:45:40.617227 containerd[1719]: time="2026-03-12T00:45:40.616998104Z" level=info msg="Forcibly stopping sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\"" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.652 [WARNING][5891] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bb4b0ced-686e-4023-85d0-51713ff7caee", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"46ca9831c9d30935c74b8c04e1911bdf3f2105110f41e6da7474606822cbd85d", Pod:"coredns-674b8bbfcf-m9kwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256c3133b67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 
00:45:40.652 [INFO][5891] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.652 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" iface="eth0" netns="" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.652 [INFO][5891] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.652 [INFO][5891] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.673 [INFO][5899] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.673 [INFO][5899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.673 [INFO][5899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.688 [WARNING][5899] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.688 [INFO][5899] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" HandleID="k8s-pod-network.ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--m9kwb-eth0" Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.689 [INFO][5899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.694551 containerd[1719]: 2026-03-12 00:45:40.692 [INFO][5891] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb" Mar 12 00:45:40.696205 containerd[1719]: time="2026-03-12T00:45:40.695073691Z" level=info msg="TearDown network for sandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\" successfully" Mar 12 00:45:40.703406 kubelet[3156]: I0312 00:45:40.702219 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6th7z" podStartSLOduration=55.702198457 podStartE2EDuration="55.702198457s" podCreationTimestamp="2026-03-12 00:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 00:45:40.701858697 +0000 UTC m=+61.390946886" watchObservedRunningTime="2026-03-12 00:45:40.702198457 +0000 UTC m=+61.391286806" Mar 12 00:45:40.704876 containerd[1719]: time="2026-03-12T00:45:40.704823499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\": an error 
occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 00:45:40.704976 containerd[1719]: time="2026-03-12T00:45:40.704898100Z" level=info msg="RemovePodSandbox \"ea576db81133aa53d88cc2b48ae86dd9a17866f2310e040c9c0a3b1d295b2bfb\" returns successfully" Mar 12 00:45:40.705544 containerd[1719]: time="2026-03-12T00:45:40.705431580Z" level=info msg="StopPodSandbox for \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\"" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.762 [WARNING][5913] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"0562416e-6f3d-4639-bbce-d4ae1ac939e1", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3", Pod:"calico-apiserver-784f7866bd-ccwm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdc0c100907", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.762 [INFO][5913] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.762 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" iface="eth0" netns="" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.762 [INFO][5913] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.762 [INFO][5913] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.798 [INFO][5923] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.798 [INFO][5923] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.798 [INFO][5923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.807 [WARNING][5923] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.807 [INFO][5923] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.808 [INFO][5923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.813171 containerd[1719]: 2026-03-12 00:45:40.810 [INFO][5913] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.813171 containerd[1719]: time="2026-03-12T00:45:40.812931592Z" level=info msg="TearDown network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\" successfully" Mar 12 00:45:40.813171 containerd[1719]: time="2026-03-12T00:45:40.812956512Z" level=info msg="StopPodSandbox for \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\" returns successfully" Mar 12 00:45:40.814642 containerd[1719]: time="2026-03-12T00:45:40.813580953Z" level=info msg="RemovePodSandbox for \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\"" Mar 12 00:45:40.814642 containerd[1719]: time="2026-03-12T00:45:40.813636993Z" level=info msg="Forcibly stopping sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\"" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.857 [WARNING][5939] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0", GenerateName:"calico-apiserver-784f7866bd-", Namespace:"calico-system", SelfLink:"", UID:"0562416e-6f3d-4639-bbce-d4ae1ac939e1", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784f7866bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"6ef49b7d16cca9470c50e6ee3db7e78bc937eef516585de3885fb06e757f94b3", Pod:"calico-apiserver-784f7866bd-ccwm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdc0c100907", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.857 [INFO][5939] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.857 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" iface="eth0" netns="" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.857 [INFO][5939] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.857 [INFO][5939] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.878 [INFO][5947] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.879 [INFO][5947] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.879 [INFO][5947] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.887 [WARNING][5947] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.887 [INFO][5947] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" HandleID="k8s-pod-network.511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Workload="ci--4081.3.6--n--d10d02cd33-k8s-calico--apiserver--784f7866bd--ccwm9-eth0" Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.888 [INFO][5947] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.892500 containerd[1719]: 2026-03-12 00:45:40.891 [INFO][5939] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb" Mar 12 00:45:40.892904 containerd[1719]: time="2026-03-12T00:45:40.892556581Z" level=info msg="TearDown network for sandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\" successfully" Mar 12 00:45:40.899425 containerd[1719]: time="2026-03-12T00:45:40.899358826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 00:45:40.899544 containerd[1719]: time="2026-03-12T00:45:40.899466826Z" level=info msg="RemovePodSandbox \"511b42ed3bc1e2ebdc7d7de4a3c296f278a39f8fc139f0ca5e3c6527dc002ecb\" returns successfully" Mar 12 00:45:40.899918 containerd[1719]: time="2026-03-12T00:45:40.899889907Z" level=info msg="StopPodSandbox for \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\"" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.933 [WARNING][5961] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"da3b3329-2fdd-41d7-bb40-059c94905ea3", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc", Pod:"goldmane-5b85766d88-rpsht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calib5bf77b6444", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.934 [INFO][5961] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.934 [INFO][5961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" iface="eth0" netns="" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.934 [INFO][5961] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.934 [INFO][5961] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.952 [INFO][5968] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.952 [INFO][5968] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.952 [INFO][5968] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.961 [WARNING][5968] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.961 [INFO][5968] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.963 [INFO][5968] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:40.967255 containerd[1719]: 2026-03-12 00:45:40.965 [INFO][5961] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:40.967678 containerd[1719]: time="2026-03-12T00:45:40.967297525Z" level=info msg="TearDown network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\" successfully" Mar 12 00:45:40.967678 containerd[1719]: time="2026-03-12T00:45:40.967330045Z" level=info msg="StopPodSandbox for \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\" returns successfully" Mar 12 00:45:40.968023 containerd[1719]: time="2026-03-12T00:45:40.967987365Z" level=info msg="RemovePodSandbox for \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\"" Mar 12 00:45:40.968074 containerd[1719]: time="2026-03-12T00:45:40.968031405Z" level=info msg="Forcibly stopping sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\"" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.000 [WARNING][5982] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"da3b3329-2fdd-41d7-bb40-059c94905ea3", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"cfdffbb4c2060132dba8f1130f954aa243bf78340b6d425c7eeec44165d0c6dc", Pod:"goldmane-5b85766d88-rpsht", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib5bf77b6444", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.000 [INFO][5982] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.000 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" iface="eth0" netns="" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.000 [INFO][5982] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.000 [INFO][5982] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.020 [INFO][5989] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.020 [INFO][5989] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.020 [INFO][5989] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.029 [WARNING][5989] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.029 [INFO][5989] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" HandleID="k8s-pod-network.1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Workload="ci--4081.3.6--n--d10d02cd33-k8s-goldmane--5b85766d88--rpsht-eth0" Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.030 [INFO][5989] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:41.034082 containerd[1719]: 2026-03-12 00:45:41.032 [INFO][5982] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f" Mar 12 00:45:41.034498 containerd[1719]: time="2026-03-12T00:45:41.034131982Z" level=info msg="TearDown network for sandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\" successfully" Mar 12 00:45:41.040898 containerd[1719]: time="2026-03-12T00:45:41.040856668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 00:45:41.040998 containerd[1719]: time="2026-03-12T00:45:41.040935868Z" level=info msg="RemovePodSandbox \"1cf873a1ae5d22ac77ac7ac19295861cfdb024003b2a9ded01f3fd3c26d8265f\" returns successfully" Mar 12 00:45:41.041771 containerd[1719]: time="2026-03-12T00:45:41.041503868Z" level=info msg="StopPodSandbox for \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\"" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.074 [WARNING][6003] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"41eb878a-d7e0-41f9-bfd8-96489d627e74", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68", Pod:"coredns-674b8bbfcf-6th7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7faa7b6cc39", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.074 [INFO][6003] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.074 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" iface="eth0" netns="" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.074 [INFO][6003] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.074 [INFO][6003] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.096 [INFO][6010] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.096 [INFO][6010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.096 [INFO][6010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.106 [WARNING][6010] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.106 [INFO][6010] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.108 [INFO][6010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:41.111698 containerd[1719]: 2026-03-12 00:45:41.110 [INFO][6003] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.111698 containerd[1719]: time="2026-03-12T00:45:41.111665009Z" level=info msg="TearDown network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\" successfully" Mar 12 00:45:41.111698 containerd[1719]: time="2026-03-12T00:45:41.111696849Z" level=info msg="StopPodSandbox for \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\" returns successfully" Mar 12 00:45:41.112344 containerd[1719]: time="2026-03-12T00:45:41.112198609Z" level=info msg="RemovePodSandbox for \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\"" Mar 12 00:45:41.112344 containerd[1719]: time="2026-03-12T00:45:41.112229449Z" level=info msg="Forcibly stopping sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\"" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.147 [WARNING][6024] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"41eb878a-d7e0-41f9-bfd8-96489d627e74", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 44, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"ba794b8792ceda843fcb796f29bf5eac1b48c23eaebc24f849ac10525b296a68", Pod:"coredns-674b8bbfcf-6th7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7faa7b6cc39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 
00:45:41.147 [INFO][6024] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.147 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" iface="eth0" netns="" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.147 [INFO][6024] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.147 [INFO][6024] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.167 [INFO][6031] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.167 [INFO][6031] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.167 [INFO][6031] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.176 [WARNING][6031] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.176 [INFO][6031] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" HandleID="k8s-pod-network.d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Workload="ci--4081.3.6--n--d10d02cd33-k8s-coredns--674b8bbfcf--6th7z-eth0" Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.177 [INFO][6031] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:45:41.181357 containerd[1719]: 2026-03-12 00:45:41.179 [INFO][6024] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5" Mar 12 00:45:41.181840 containerd[1719]: time="2026-03-12T00:45:41.181410908Z" level=info msg="TearDown network for sandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\" successfully" Mar 12 00:45:41.188541 containerd[1719]: time="2026-03-12T00:45:41.188500994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 00:45:41.188869 containerd[1719]: time="2026-03-12T00:45:41.188571155Z" level=info msg="RemovePodSandbox \"d5f30fc07c41e0b82ac8acb7d5be7496e2182f8ff63e75861845209c4398a3e5\" returns successfully" Mar 12 00:45:41.497049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512149040.mount: Deactivated successfully. 
Mar 12 00:45:41.539598 containerd[1719]: time="2026-03-12T00:45:41.539547416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:41.541909 containerd[1719]: time="2026-03-12T00:45:41.541881578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 12 00:45:41.544517 containerd[1719]: time="2026-03-12T00:45:41.544467380Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:41.548140 containerd[1719]: time="2026-03-12T00:45:41.548075623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:41.549419 containerd[1719]: time="2026-03-12T00:45:41.548728504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 3.345537248s" Mar 12 00:45:41.549419 containerd[1719]: time="2026-03-12T00:45:41.548759544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 12 00:45:41.550535 containerd[1719]: time="2026-03-12T00:45:41.549680104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 00:45:41.555205 containerd[1719]: time="2026-03-12T00:45:41.555175149Z" level=info msg="CreateContainer within sandbox 
\"ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 00:45:41.586975 containerd[1719]: time="2026-03-12T00:45:41.586929656Z" level=info msg="CreateContainer within sandbox \"ca505367b328fcaf522d0c055e1eed49f0c4fb16f878bfe159df7af164d43429\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f244d9ce9a98c30220fab25c689103be8edb32fe33d6480078ce4ab6df4a6395\"" Mar 12 00:45:41.589989 containerd[1719]: time="2026-03-12T00:45:41.588772298Z" level=info msg="StartContainer for \"f244d9ce9a98c30220fab25c689103be8edb32fe33d6480078ce4ab6df4a6395\"" Mar 12 00:45:41.611554 systemd[1]: Started cri-containerd-f244d9ce9a98c30220fab25c689103be8edb32fe33d6480078ce4ab6df4a6395.scope - libcontainer container f244d9ce9a98c30220fab25c689103be8edb32fe33d6480078ce4ab6df4a6395. Mar 12 00:45:41.644710 containerd[1719]: time="2026-03-12T00:45:41.644667386Z" level=info msg="StartContainer for \"f244d9ce9a98c30220fab25c689103be8edb32fe33d6480078ce4ab6df4a6395\" returns successfully" Mar 12 00:45:41.789569 systemd-networkd[1360]: cali492c24933fb: Gained IPv6LL Mar 12 00:45:43.098812 containerd[1719]: time="2026-03-12T00:45:43.098760993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:43.101156 containerd[1719]: time="2026-03-12T00:45:43.101126675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 12 00:45:43.104104 containerd[1719]: time="2026-03-12T00:45:43.104074078Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:43.110474 containerd[1719]: time="2026-03-12T00:45:43.110413923Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:43.111145 containerd[1719]: time="2026-03-12T00:45:43.110978004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.5612603s" Mar 12 00:45:43.111145 containerd[1719]: time="2026-03-12T00:45:43.111009724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 12 00:45:43.117607 containerd[1719]: time="2026-03-12T00:45:43.117576850Z" level=info msg="CreateContainer within sandbox \"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 00:45:43.146641 containerd[1719]: time="2026-03-12T00:45:43.146591634Z" level=info msg="CreateContainer within sandbox \"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b335f4cebe86c69558ed27619107d9beaac116efc8500ea76b98bf997bdd8677\"" Mar 12 00:45:43.148441 containerd[1719]: time="2026-03-12T00:45:43.147273635Z" level=info msg="StartContainer for \"b335f4cebe86c69558ed27619107d9beaac116efc8500ea76b98bf997bdd8677\"" Mar 12 00:45:43.183552 systemd[1]: run-containerd-runc-k8s.io-b335f4cebe86c69558ed27619107d9beaac116efc8500ea76b98bf997bdd8677-runc.C7d6Lu.mount: Deactivated successfully. 
Mar 12 00:45:43.193542 systemd[1]: Started cri-containerd-b335f4cebe86c69558ed27619107d9beaac116efc8500ea76b98bf997bdd8677.scope - libcontainer container b335f4cebe86c69558ed27619107d9beaac116efc8500ea76b98bf997bdd8677. Mar 12 00:45:43.224522 containerd[1719]: time="2026-03-12T00:45:43.224477461Z" level=info msg="StartContainer for \"b335f4cebe86c69558ed27619107d9beaac116efc8500ea76b98bf997bdd8677\" returns successfully" Mar 12 00:45:43.226331 containerd[1719]: time="2026-03-12T00:45:43.226207863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 00:45:45.044788 containerd[1719]: time="2026-03-12T00:45:45.044723463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:45.047416 containerd[1719]: time="2026-03-12T00:45:45.047385265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 12 00:45:45.050519 containerd[1719]: time="2026-03-12T00:45:45.050468068Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:45.054964 containerd[1719]: time="2026-03-12T00:45:45.054900432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 00:45:45.055664 containerd[1719]: time="2026-03-12T00:45:45.055363352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.829123609s" Mar 12 00:45:45.055664 containerd[1719]: time="2026-03-12T00:45:45.055422872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 12 00:45:45.063973 containerd[1719]: time="2026-03-12T00:45:45.063940599Z" level=info msg="CreateContainer within sandbox \"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 00:45:45.102096 containerd[1719]: time="2026-03-12T00:45:45.102044952Z" level=info msg="CreateContainer within sandbox \"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b9b14d22bc207a81030ad39ff2b94e7473ae14b4e1242ad12887c368f2b499f6\"" Mar 12 00:45:45.103497 containerd[1719]: time="2026-03-12T00:45:45.103433353Z" level=info msg="StartContainer for \"b9b14d22bc207a81030ad39ff2b94e7473ae14b4e1242ad12887c368f2b499f6\"" Mar 12 00:45:45.137526 systemd[1]: Started cri-containerd-b9b14d22bc207a81030ad39ff2b94e7473ae14b4e1242ad12887c368f2b499f6.scope - libcontainer container b9b14d22bc207a81030ad39ff2b94e7473ae14b4e1242ad12887c368f2b499f6. 
Mar 12 00:45:45.167528 containerd[1719]: time="2026-03-12T00:45:45.167317568Z" level=info msg="StartContainer for \"b9b14d22bc207a81030ad39ff2b94e7473ae14b4e1242ad12887c368f2b499f6\" returns successfully" Mar 12 00:45:45.498632 kubelet[3156]: I0312 00:45:45.498540 3156 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 00:45:45.498632 kubelet[3156]: I0312 00:45:45.498576 3156 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 00:45:45.724727 kubelet[3156]: I0312 00:45:45.724662 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-747bbc7567-c8fnh" podStartSLOduration=5.677333181 podStartE2EDuration="19.724646406s" podCreationTimestamp="2026-03-12 00:45:26 +0000 UTC" firstStartedPulling="2026-03-12 00:45:27.502219759 +0000 UTC m=+48.191307948" lastFinishedPulling="2026-03-12 00:45:41.549532984 +0000 UTC m=+62.238621173" observedRunningTime="2026-03-12 00:45:41.708460681 +0000 UTC m=+62.397548870" watchObservedRunningTime="2026-03-12 00:45:45.724646406 +0000 UTC m=+66.413734555" Mar 12 00:45:45.724920 kubelet[3156]: I0312 00:45:45.724895 3156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b7676" podStartSLOduration=39.115269068 podStartE2EDuration="43.724890887s" podCreationTimestamp="2026-03-12 00:45:02 +0000 UTC" firstStartedPulling="2026-03-12 00:45:40.446587854 +0000 UTC m=+61.135676043" lastFinishedPulling="2026-03-12 00:45:45.056209673 +0000 UTC m=+65.745297862" observedRunningTime="2026-03-12 00:45:45.722898165 +0000 UTC m=+66.411986354" watchObservedRunningTime="2026-03-12 00:45:45.724890887 +0000 UTC m=+66.413979076" Mar 12 00:45:56.621266 systemd[1]: 
run-containerd-runc-k8s.io-99d49febdaab663dd356b671a6315d7c5f066e052dbeb8df852c8a714c2410c8-runc.1s1Ymp.mount: Deactivated successfully. Mar 12 00:45:59.627158 kubelet[3156]: I0312 00:45:59.627118 3156 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 00:46:21.455911 kubelet[3156]: I0312 00:46:21.455487 3156 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 00:46:41.192093 containerd[1719]: time="2026-03-12T00:46:41.192055217Z" level=info msg="StopPodSandbox for \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\"" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.230 [WARNING][6402] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe7afae4-b713-43a4-990f-e7b7f89a4386", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", 
ContainerID:"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549", Pod:"csi-node-driver-b7676", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali492c24933fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.230 [INFO][6402] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.230 [INFO][6402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" iface="eth0" netns="" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.230 [INFO][6402] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.230 [INFO][6402] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.249 [INFO][6409] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.250 [INFO][6409] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.250 [INFO][6409] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.259 [WARNING][6409] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.259 [INFO][6409] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0" Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.260 [INFO][6409] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 00:46:41.264940 containerd[1719]: 2026-03-12 00:46:41.262 [INFO][6402] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018"
Mar 12 00:46:41.265681 containerd[1719]: time="2026-03-12T00:46:41.265389736Z" level=info msg="TearDown network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\" successfully"
Mar 12 00:46:41.265681 containerd[1719]: time="2026-03-12T00:46:41.265430537Z" level=info msg="StopPodSandbox for \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\" returns successfully"
Mar 12 00:46:41.265951 containerd[1719]: time="2026-03-12T00:46:41.265925657Z" level=info msg="RemovePodSandbox for \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\""
Mar 12 00:46:41.266015 containerd[1719]: time="2026-03-12T00:46:41.265962337Z" level=info msg="Forcibly stopping sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\""
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.298 [WARNING][6423] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe7afae4-b713-43a4-990f-e7b7f89a4386", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 0, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d10d02cd33", ContainerID:"0e7c031678d770727b14f90fc4cf957306075d71a616004a82a1810cab040549", Pod:"csi-node-driver-b7676", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali492c24933fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.298 [INFO][6423] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018"
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.298 [INFO][6423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" iface="eth0" netns=""
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.298 [INFO][6423] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018"
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.298 [INFO][6423] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018"
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.319 [INFO][6430] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0"
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.319 [INFO][6430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.319 [INFO][6430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.327 [WARNING][6430] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0"
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.327 [INFO][6430] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" HandleID="k8s-pod-network.b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018" Workload="ci--4081.3.6--n--d10d02cd33-k8s-csi--node--driver--b7676-eth0"
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.328 [INFO][6430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 00:46:41.332396 containerd[1719]: 2026-03-12 00:46:41.330 [INFO][6423] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018"
Mar 12 00:46:41.332828 containerd[1719]: time="2026-03-12T00:46:41.332522653Z" level=info msg="TearDown network for sandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\" successfully"
Mar 12 00:46:41.341694 containerd[1719]: time="2026-03-12T00:46:41.341649098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 12 00:46:41.341799 containerd[1719]: time="2026-03-12T00:46:41.341734418Z" level=info msg="RemovePodSandbox \"b9b653f94b58cb72448a8df37af3ab3d749e84bf5ba6978f8cace4d1d2856018\" returns successfully"
Mar 12 00:46:56.482639 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:60118.service - OpenSSH per-connection server daemon (10.200.16.10:60118).
Mar 12 00:46:56.972925 sshd[6465]: Accepted publickey for core from 10.200.16.10 port 60118 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:46:56.974930 sshd[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:46:56.979597 systemd-logind[1701]: New session 10 of user core.
Mar 12 00:46:56.985524 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 00:46:57.412589 sshd[6465]: pam_unix(sshd:session): session closed for user core
Mar 12 00:46:57.416093 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:60118.service: Deactivated successfully.
Mar 12 00:46:57.418311 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 00:46:57.420072 systemd-logind[1701]: Session 10 logged out. Waiting for processes to exit.
Mar 12 00:46:57.421024 systemd-logind[1701]: Removed session 10.
Mar 12 00:47:02.501651 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:38532.service - OpenSSH per-connection server daemon (10.200.16.10:38532).
Mar 12 00:47:02.993097 sshd[6537]: Accepted publickey for core from 10.200.16.10 port 38532 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:03.009634 sshd[6537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:03.016041 systemd-logind[1701]: New session 11 of user core.
Mar 12 00:47:03.021772 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 00:47:03.422687 sshd[6537]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:03.425872 systemd-logind[1701]: Session 11 logged out. Waiting for processes to exit.
Mar 12 00:47:03.427365 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:38532.service: Deactivated successfully.
Mar 12 00:47:03.429977 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 00:47:03.431065 systemd-logind[1701]: Removed session 11.
Mar 12 00:47:08.510883 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:38534.service - OpenSSH per-connection server daemon (10.200.16.10:38534).
Mar 12 00:47:08.978394 sshd[6579]: Accepted publickey for core from 10.200.16.10 port 38534 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:08.980061 sshd[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:08.983880 systemd-logind[1701]: New session 12 of user core.
Mar 12 00:47:08.995528 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 00:47:09.380200 sshd[6579]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:09.384326 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:38534.service: Deactivated successfully.
Mar 12 00:47:09.387159 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 00:47:09.387978 systemd-logind[1701]: Session 12 logged out. Waiting for processes to exit.
Mar 12 00:47:09.389352 systemd-logind[1701]: Removed session 12.
Mar 12 00:47:14.472145 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:44150.service - OpenSSH per-connection server daemon (10.200.16.10:44150).
Mar 12 00:47:14.964239 sshd[6613]: Accepted publickey for core from 10.200.16.10 port 44150 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:14.965151 sshd[6613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:14.969290 systemd-logind[1701]: New session 13 of user core.
Mar 12 00:47:14.974701 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 00:47:15.374070 sshd[6613]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:15.376904 systemd-logind[1701]: Session 13 logged out. Waiting for processes to exit.
Mar 12 00:47:15.377185 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:44150.service: Deactivated successfully.
Mar 12 00:47:15.379189 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 00:47:15.381864 systemd-logind[1701]: Removed session 13.
Mar 12 00:47:20.466866 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:60690.service - OpenSSH per-connection server daemon (10.200.16.10:60690).
Mar 12 00:47:20.962824 sshd[6648]: Accepted publickey for core from 10.200.16.10 port 60690 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:20.963706 sshd[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:20.967595 systemd-logind[1701]: New session 14 of user core.
Mar 12 00:47:20.979531 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 00:47:21.376578 sshd[6648]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:21.380771 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:60690.service: Deactivated successfully.
Mar 12 00:47:21.382596 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 00:47:21.383317 systemd-logind[1701]: Session 14 logged out. Waiting for processes to exit.
Mar 12 00:47:21.384337 systemd-logind[1701]: Removed session 14.
Mar 12 00:47:21.461635 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:60704.service - OpenSSH per-connection server daemon (10.200.16.10:60704).
Mar 12 00:47:21.957407 sshd[6661]: Accepted publickey for core from 10.200.16.10 port 60704 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:21.958431 sshd[6661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:21.962114 systemd-logind[1701]: New session 15 of user core.
Mar 12 00:47:21.968516 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 00:47:22.403112 sshd[6661]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:22.407106 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:60704.service: Deactivated successfully.
Mar 12 00:47:22.410113 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 00:47:22.411418 systemd-logind[1701]: Session 15 logged out. Waiting for processes to exit.
Mar 12 00:47:22.412765 systemd-logind[1701]: Removed session 15.
Mar 12 00:47:22.501634 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:60716.service - OpenSSH per-connection server daemon (10.200.16.10:60716).
Mar 12 00:47:22.988169 sshd[6672]: Accepted publickey for core from 10.200.16.10 port 60716 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:22.989527 sshd[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:22.993743 systemd-logind[1701]: New session 16 of user core.
Mar 12 00:47:23.002525 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 00:47:23.397676 sshd[6672]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:23.403292 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:60716.service: Deactivated successfully.
Mar 12 00:47:23.405671 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 00:47:23.406974 systemd-logind[1701]: Session 16 logged out. Waiting for processes to exit.
Mar 12 00:47:23.407910 systemd-logind[1701]: Removed session 16.
Mar 12 00:47:28.490618 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:60726.service - OpenSSH per-connection server daemon (10.200.16.10:60726).
Mar 12 00:47:28.982113 sshd[6725]: Accepted publickey for core from 10.200.16.10 port 60726 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:28.983064 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:28.987424 systemd-logind[1701]: New session 17 of user core.
Mar 12 00:47:28.993520 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 00:47:29.394696 sshd[6725]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:29.398738 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:60726.service: Deactivated successfully.
Mar 12 00:47:29.400654 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 00:47:29.403507 systemd-logind[1701]: Session 17 logged out. Waiting for processes to exit.
Mar 12 00:47:29.405283 systemd-logind[1701]: Removed session 17.
Mar 12 00:47:29.481917 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:60728.service - OpenSSH per-connection server daemon (10.200.16.10:60728).
Mar 12 00:47:29.943404 sshd[6739]: Accepted publickey for core from 10.200.16.10 port 60728 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:29.944574 sshd[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:29.948423 systemd-logind[1701]: New session 18 of user core.
Mar 12 00:47:29.955521 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 00:47:30.493120 sshd[6739]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:30.496816 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:60728.service: Deactivated successfully.
Mar 12 00:47:30.498830 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 00:47:30.499803 systemd-logind[1701]: Session 18 logged out. Waiting for processes to exit.
Mar 12 00:47:30.501139 systemd-logind[1701]: Removed session 18.
Mar 12 00:47:30.582998 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:49444.service - OpenSSH per-connection server daemon (10.200.16.10:49444).
Mar 12 00:47:31.078618 sshd[6750]: Accepted publickey for core from 10.200.16.10 port 49444 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:31.080323 sshd[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:31.085821 systemd-logind[1701]: New session 19 of user core.
Mar 12 00:47:31.094611 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 00:47:31.947820 sshd[6750]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:31.951673 systemd-logind[1701]: Session 19 logged out. Waiting for processes to exit.
Mar 12 00:47:31.951867 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:49444.service: Deactivated successfully.
Mar 12 00:47:31.953570 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 00:47:31.955278 systemd-logind[1701]: Removed session 19.
Mar 12 00:47:32.035403 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:49460.service - OpenSSH per-connection server daemon (10.200.16.10:49460).
Mar 12 00:47:32.532495 sshd[6782]: Accepted publickey for core from 10.200.16.10 port 49460 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:32.534075 sshd[6782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:32.537637 systemd-logind[1701]: New session 20 of user core.
Mar 12 00:47:32.543565 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 00:47:33.057244 sshd[6782]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:33.061687 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:49460.service: Deactivated successfully.
Mar 12 00:47:33.064109 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 00:47:33.065872 systemd-logind[1701]: Session 20 logged out. Waiting for processes to exit.
Mar 12 00:47:33.066815 systemd-logind[1701]: Removed session 20.
Mar 12 00:47:33.145794 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:49462.service - OpenSSH per-connection server daemon (10.200.16.10:49462).
Mar 12 00:47:33.635401 sshd[6792]: Accepted publickey for core from 10.200.16.10 port 49462 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:33.636330 sshd[6792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:33.640962 systemd-logind[1701]: New session 21 of user core.
Mar 12 00:47:33.644507 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 00:47:34.040755 sshd[6792]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:34.044669 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:49462.service: Deactivated successfully.
Mar 12 00:47:34.046625 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 00:47:34.047786 systemd-logind[1701]: Session 21 logged out. Waiting for processes to exit.
Mar 12 00:47:34.048754 systemd-logind[1701]: Removed session 21.
Mar 12 00:47:38.679207 systemd[1]: run-containerd-runc-k8s.io-27a475cc2d6a0483f4457e91e7e3fdc142f5ca053f27775842704fd51478fdab-runc.LJ7w6j.mount: Deactivated successfully.
Mar 12 00:47:39.135597 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:49478.service - OpenSSH per-connection server daemon (10.200.16.10:49478).
Mar 12 00:47:39.622401 sshd[6865]: Accepted publickey for core from 10.200.16.10 port 49478 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:39.623497 sshd[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:39.627550 systemd-logind[1701]: New session 22 of user core.
Mar 12 00:47:39.638535 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 00:47:40.036031 sshd[6865]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:40.039960 systemd-logind[1701]: Session 22 logged out. Waiting for processes to exit.
Mar 12 00:47:40.040632 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:49478.service: Deactivated successfully.
Mar 12 00:47:40.042891 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 00:47:40.044428 systemd-logind[1701]: Removed session 22.
Mar 12 00:47:45.120718 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:59746.service - OpenSSH per-connection server daemon (10.200.16.10:59746).
Mar 12 00:47:45.581810 sshd[6879]: Accepted publickey for core from 10.200.16.10 port 59746 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:45.583248 sshd[6879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:45.588083 systemd-logind[1701]: New session 23 of user core.
Mar 12 00:47:45.593548 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 00:47:45.967626 sshd[6879]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:45.971027 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:59746.service: Deactivated successfully.
Mar 12 00:47:45.973397 systemd[1]: session-23.scope: Deactivated successfully.
Mar 12 00:47:45.974363 systemd-logind[1701]: Session 23 logged out. Waiting for processes to exit.
Mar 12 00:47:45.975444 systemd-logind[1701]: Removed session 23.
Mar 12 00:47:51.065987 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:58800.service - OpenSSH per-connection server daemon (10.200.16.10:58800).
Mar 12 00:47:51.553420 sshd[6894]: Accepted publickey for core from 10.200.16.10 port 58800 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:51.554792 sshd[6894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:51.559599 systemd-logind[1701]: New session 24 of user core.
Mar 12 00:47:51.565595 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 12 00:47:51.988547 sshd[6894]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:51.992576 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:58800.service: Deactivated successfully.
Mar 12 00:47:51.994799 systemd[1]: session-24.scope: Deactivated successfully.
Mar 12 00:47:51.995662 systemd-logind[1701]: Session 24 logged out. Waiting for processes to exit.
Mar 12 00:47:51.996751 systemd-logind[1701]: Removed session 24.
Mar 12 00:47:57.089945 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:58804.service - OpenSSH per-connection server daemon (10.200.16.10:58804).
Mar 12 00:47:57.574898 sshd[6927]: Accepted publickey for core from 10.200.16.10 port 58804 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:47:57.576321 sshd[6927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:47:57.579988 systemd-logind[1701]: New session 25 of user core.
Mar 12 00:47:57.587547 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 12 00:47:57.986000 sshd[6927]: pam_unix(sshd:session): session closed for user core
Mar 12 00:47:57.989588 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:58804.service: Deactivated successfully.
Mar 12 00:47:57.991892 systemd[1]: session-25.scope: Deactivated successfully.
Mar 12 00:47:57.994008 systemd-logind[1701]: Session 25 logged out. Waiting for processes to exit.
Mar 12 00:47:57.995037 systemd-logind[1701]: Removed session 25.
Mar 12 00:48:03.074929 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:59758.service - OpenSSH per-connection server daemon (10.200.16.10:59758).
Mar 12 00:48:03.537411 sshd[6940]: Accepted publickey for core from 10.200.16.10 port 59758 ssh2: RSA SHA256:bvU35A80s0VuxROJMNNQrx8uj2qWF7geEg4wypqva8o
Mar 12 00:48:03.538625 sshd[6940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 00:48:03.542477 systemd-logind[1701]: New session 26 of user core.
Mar 12 00:48:03.549525 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 12 00:48:03.927803 sshd[6940]: pam_unix(sshd:session): session closed for user core
Mar 12 00:48:03.930929 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:59758.service: Deactivated successfully.
Mar 12 00:48:03.934059 systemd[1]: session-26.scope: Deactivated successfully.
Mar 12 00:48:03.934988 systemd-logind[1701]: Session 26 logged out. Waiting for processes to exit.
Mar 12 00:48:03.937654 systemd-logind[1701]: Removed session 26.
Mar 12 00:48:04.992194 update_engine[1706]: I20260312 00:48:04.991451 1706 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 12 00:48:04.992194 update_engine[1706]: I20260312 00:48:04.991513 1706 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 12 00:48:04.992194 update_engine[1706]: I20260312 00:48:04.991741 1706 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 12 00:48:04.992194 update_engine[1706]: I20260312 00:48:04.992106 1706 omaha_request_params.cc:62] Current group set to lts
Mar 12 00:48:04.993272 update_engine[1706]: I20260312 00:48:04.993245 1706 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 12 00:48:04.993355 update_engine[1706]: I20260312 00:48:04.993340 1706 update_attempter.cc:643] Scheduling an action processor start.
Mar 12 00:48:04.993436 update_engine[1706]: I20260312 00:48:04.993420 1706 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 12 00:48:04.993529 locksmithd[1762]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 12 00:48:04.994399 update_engine[1706]: I20260312 00:48:04.994266 1706 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 12 00:48:04.994399 update_engine[1706]: I20260312 00:48:04.994339 1706 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 12 00:48:04.994399 update_engine[1706]: I20260312 00:48:04.994347 1706 omaha_request_action.cc:272] Request:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]:
Mar 12 00:48:04.994399 update_engine[1706]: I20260312 00:48:04.994354 1706 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 12 00:48:04.997554 update_engine[1706]: I20260312 00:48:04.997526 1706 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 12 00:48:04.997841 update_engine[1706]: I20260312 00:48:04.997813 1706 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 12 00:48:05.037895 update_engine[1706]: E20260312 00:48:05.037835 1706 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 12 00:48:05.038016 update_engine[1706]: I20260312 00:48:05.037932 1706 libcurl_http_fetcher.cc:283] No HTTP response, retry 1