Mar 4 00:47:49.201259 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 4 00:47:49.201281 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026 Mar 4 00:47:49.201290 kernel: KASLR enabled Mar 4 00:47:49.201295 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 4 00:47:49.201303 kernel: printk: bootconsole [pl11] enabled Mar 4 00:47:49.201308 kernel: efi: EFI v2.7 by EDK II Mar 4 00:47:49.201315 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Mar 4 00:47:49.201321 kernel: random: crng init done Mar 4 00:47:49.201327 kernel: ACPI: Early table checksum verification disabled Mar 4 00:47:49.201333 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 4 00:47:49.201339 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201345 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201353 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 4 00:47:49.201359 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201367 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201373 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201380 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201388 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201394 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201400 kernel: ACPI: PPTT 
0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 4 00:47:49.201407 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 4 00:47:49.201413 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 4 00:47:49.201419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Mar 4 00:47:49.201426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Mar 4 00:47:49.201432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Mar 4 00:47:49.201439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Mar 4 00:47:49.201445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Mar 4 00:47:49.201451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Mar 4 00:47:49.201459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Mar 4 00:47:49.201465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Mar 4 00:47:49.201472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Mar 4 00:47:49.201478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Mar 4 00:47:49.201484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Mar 4 00:47:49.201491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Mar 4 00:47:49.201497 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Mar 4 00:47:49.201503 kernel: Zone ranges: Mar 4 00:47:49.201509 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 4 00:47:49.201516 kernel: DMA32 empty Mar 4 00:47:49.201522 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 4 00:47:49.201529 kernel: Movable zone start for each node Mar 4 00:47:49.201539 kernel: Early memory node ranges Mar 4 00:47:49.201546 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Mar 4 00:47:49.201552 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 4 00:47:49.201559 kernel: node 0: [mem 
0x000000003e550000-0x000000003e87ffff] Mar 4 00:47:49.201566 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 4 00:47:49.201574 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 4 00:47:49.201581 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 4 00:47:49.201588 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 4 00:47:49.201595 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 4 00:47:49.201601 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 4 00:47:49.201608 kernel: psci: probing for conduit method from ACPI. Mar 4 00:47:49.201615 kernel: psci: PSCIv1.1 detected in firmware. Mar 4 00:47:49.201621 kernel: psci: Using standard PSCI v0.2 function IDs Mar 4 00:47:49.201628 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 4 00:47:49.201635 kernel: psci: SMC Calling Convention v1.4 Mar 4 00:47:49.201641 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 4 00:47:49.201648 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 4 00:47:49.201656 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 4 00:47:49.201663 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 4 00:47:49.201670 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 4 00:47:49.201677 kernel: Detected PIPT I-cache on CPU0 Mar 4 00:47:49.203722 kernel: CPU features: detected: GIC system register CPU interface Mar 4 00:47:49.203738 kernel: CPU features: detected: Hardware dirty bit management Mar 4 00:47:49.203745 kernel: CPU features: detected: Spectre-BHB Mar 4 00:47:49.203753 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 4 00:47:49.203760 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 4 00:47:49.203767 kernel: CPU features: detected: ARM erratum 1418040 Mar 4 00:47:49.203774 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 4 00:47:49.203785 kernel: CPU features: detected: SSBS not fully 
self-synchronizing Mar 4 00:47:49.203792 kernel: alternatives: applying boot alternatives Mar 4 00:47:49.203800 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010 Mar 4 00:47:49.203808 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 4 00:47:49.203815 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 4 00:47:49.203822 kernel: Fallback order for Node 0: 0 Mar 4 00:47:49.203829 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 4 00:47:49.203835 kernel: Policy zone: Normal Mar 4 00:47:49.203842 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 4 00:47:49.203849 kernel: software IO TLB: area num 2. Mar 4 00:47:49.203856 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 4 00:47:49.203865 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 4 00:47:49.203872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 4 00:47:49.203879 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 4 00:47:49.203886 kernel: rcu: RCU event tracing is enabled. Mar 4 00:47:49.203893 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 4 00:47:49.203900 kernel: Trampoline variant of Tasks RCU enabled. Mar 4 00:47:49.203907 kernel: Tracing variant of Tasks RCU enabled. Mar 4 00:47:49.203914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 4 00:47:49.203921 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 4 00:47:49.203927 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 4 00:47:49.203934 kernel: GICv3: 960 SPIs implemented Mar 4 00:47:49.203942 kernel: GICv3: 0 Extended SPIs implemented Mar 4 00:47:49.203949 kernel: Root IRQ handler: gic_handle_irq Mar 4 00:47:49.203956 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 4 00:47:49.203963 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 4 00:47:49.203970 kernel: ITS: No ITS available, not enabling LPIs Mar 4 00:47:49.203977 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 4 00:47:49.203984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 4 00:47:49.203991 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 4 00:47:49.203998 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 4 00:47:49.204005 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 4 00:47:49.204012 kernel: Console: colour dummy device 80x25 Mar 4 00:47:49.204021 kernel: printk: console [tty1] enabled Mar 4 00:47:49.204028 kernel: ACPI: Core revision 20230628 Mar 4 00:47:49.204035 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 4 00:47:49.204043 kernel: pid_max: default: 32768 minimum: 301 Mar 4 00:47:49.204050 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 4 00:47:49.204057 kernel: landlock: Up and running. Mar 4 00:47:49.204063 kernel: SELinux: Initializing. 
Mar 4 00:47:49.204070 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 00:47:49.204077 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 00:47:49.204086 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 4 00:47:49.204093 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 4 00:47:49.204101 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 4 00:47:49.204108 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 4 00:47:49.204114 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 4 00:47:49.204121 kernel: rcu: Hierarchical SRCU implementation. Mar 4 00:47:49.204128 kernel: rcu: Max phase no-delay instances is 400. Mar 4 00:47:49.204136 kernel: Remapping and enabling EFI services. Mar 4 00:47:49.204149 kernel: smp: Bringing up secondary CPUs ... Mar 4 00:47:49.204156 kernel: Detected PIPT I-cache on CPU1 Mar 4 00:47:49.204164 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 4 00:47:49.204171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 4 00:47:49.204180 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 4 00:47:49.204187 kernel: smp: Brought up 1 node, 2 CPUs Mar 4 00:47:49.204195 kernel: SMP: Total of 2 processors activated. 
Mar 4 00:47:49.204202 kernel: CPU features: detected: 32-bit EL0 Support Mar 4 00:47:49.204210 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 4 00:47:49.204219 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 4 00:47:49.204226 kernel: CPU features: detected: CRC32 instructions Mar 4 00:47:49.204233 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 4 00:47:49.204241 kernel: CPU features: detected: LSE atomic instructions Mar 4 00:47:49.204248 kernel: CPU features: detected: Privileged Access Never Mar 4 00:47:49.204255 kernel: CPU: All CPU(s) started at EL1 Mar 4 00:47:49.204263 kernel: alternatives: applying system-wide alternatives Mar 4 00:47:49.204270 kernel: devtmpfs: initialized Mar 4 00:47:49.204277 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 4 00:47:49.204286 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 4 00:47:49.204294 kernel: pinctrl core: initialized pinctrl subsystem Mar 4 00:47:49.204301 kernel: SMBIOS 3.1.0 present. 
Mar 4 00:47:49.204309 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 4 00:47:49.204316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 4 00:47:49.204323 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 4 00:47:49.204331 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 4 00:47:49.204338 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 4 00:47:49.204346 kernel: audit: initializing netlink subsys (disabled) Mar 4 00:47:49.204355 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 4 00:47:49.204362 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 4 00:47:49.204369 kernel: cpuidle: using governor menu Mar 4 00:47:49.204377 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 4 00:47:49.204384 kernel: ASID allocator initialised with 32768 entries Mar 4 00:47:49.204391 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 4 00:47:49.204399 kernel: Serial: AMBA PL011 UART driver Mar 4 00:47:49.204407 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 4 00:47:49.204414 kernel: Modules: 0 pages in range for non-PLT usage Mar 4 00:47:49.204423 kernel: Modules: 509008 pages in range for PLT usage Mar 4 00:47:49.204430 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 4 00:47:49.204438 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 4 00:47:49.204445 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 4 00:47:49.204452 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 4 00:47:49.204460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 4 00:47:49.204467 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 4 00:47:49.204475 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Mar 4 00:47:49.204482 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 4 00:47:49.204491 kernel: ACPI: Added _OSI(Module Device) Mar 4 00:47:49.204498 kernel: ACPI: Added _OSI(Processor Device) Mar 4 00:47:49.204506 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 4 00:47:49.204513 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 4 00:47:49.204520 kernel: ACPI: Interpreter enabled Mar 4 00:47:49.204527 kernel: ACPI: Using GIC for interrupt routing Mar 4 00:47:49.204535 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 4 00:47:49.204542 kernel: printk: console [ttyAMA0] enabled Mar 4 00:47:49.204549 kernel: printk: bootconsole [pl11] disabled Mar 4 00:47:49.204558 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 4 00:47:49.204566 kernel: iommu: Default domain type: Translated Mar 4 00:47:49.204573 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 4 00:47:49.204580 kernel: efivars: Registered efivars operations Mar 4 00:47:49.204587 kernel: vgaarb: loaded Mar 4 00:47:49.204595 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 4 00:47:49.204602 kernel: VFS: Disk quotas dquot_6.6.0 Mar 4 00:47:49.204609 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 4 00:47:49.204616 kernel: pnp: PnP ACPI init Mar 4 00:47:49.204625 kernel: pnp: PnP ACPI: found 0 devices Mar 4 00:47:49.204632 kernel: NET: Registered PF_INET protocol family Mar 4 00:47:49.204640 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 4 00:47:49.204647 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 4 00:47:49.204655 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 4 00:47:49.204662 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 4 00:47:49.204669 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 4 00:47:49.204677 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 4 00:47:49.204695 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 00:47:49.204706 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 00:47:49.204714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 4 00:47:49.204721 kernel: PCI: CLS 0 bytes, default 64 Mar 4 00:47:49.204729 kernel: kvm [1]: HYP mode not available Mar 4 00:47:49.204736 kernel: Initialise system trusted keyrings Mar 4 00:47:49.204744 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 4 00:47:49.204751 kernel: Key type asymmetric registered Mar 4 00:47:49.204758 kernel: Asymmetric key parser 'x509' registered Mar 4 00:47:49.204766 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 4 00:47:49.204774 kernel: io scheduler mq-deadline registered Mar 4 00:47:49.204782 kernel: io scheduler kyber registered Mar 4 00:47:49.204789 kernel: io scheduler bfq registered Mar 4 00:47:49.204797 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 00:47:49.204804 kernel: thunder_xcv, ver 1.0 Mar 4 00:47:49.204811 kernel: thunder_bgx, ver 1.0 Mar 4 00:47:49.204819 kernel: nicpf, ver 1.0 Mar 4 00:47:49.204826 kernel: nicvf, ver 1.0 Mar 4 00:47:49.204966 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 4 00:47:49.205044 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T00:47:48 UTC (1772585268) Mar 4 00:47:49.205055 kernel: efifb: probing for efifb Mar 4 00:47:49.205063 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 4 00:47:49.205070 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 4 00:47:49.205078 kernel: efifb: scrolling: redraw Mar 4 00:47:49.205085 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 4 00:47:49.205092 kernel: Console: switching to colour 
frame buffer device 128x48 Mar 4 00:47:49.205100 kernel: fb0: EFI VGA frame buffer device Mar 4 00:47:49.205110 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 4 00:47:49.205118 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 4 00:47:49.205125 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 4 00:47:49.205133 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 4 00:47:49.205140 kernel: watchdog: Hard watchdog permanently disabled Mar 4 00:47:49.205147 kernel: NET: Registered PF_INET6 protocol family Mar 4 00:47:49.205154 kernel: Segment Routing with IPv6 Mar 4 00:47:49.205162 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 00:47:49.205169 kernel: NET: Registered PF_PACKET protocol family Mar 4 00:47:49.205178 kernel: Key type dns_resolver registered Mar 4 00:47:49.205185 kernel: registered taskstats version 1 Mar 4 00:47:49.205192 kernel: Loading compiled-in X.509 certificates Mar 4 00:47:49.205199 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659' Mar 4 00:47:49.205207 kernel: Key type .fscrypt registered Mar 4 00:47:49.205214 kernel: Key type fscrypt-provisioning registered Mar 4 00:47:49.205221 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 4 00:47:49.205228 kernel: ima: Allocated hash algorithm: sha1 Mar 4 00:47:49.205235 kernel: ima: No architecture policies found Mar 4 00:47:49.205244 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 4 00:47:49.205252 kernel: clk: Disabling unused clocks Mar 4 00:47:49.205259 kernel: Freeing unused kernel memory: 39424K Mar 4 00:47:49.205266 kernel: Run /init as init process Mar 4 00:47:49.205274 kernel: with arguments: Mar 4 00:47:49.205281 kernel: /init Mar 4 00:47:49.205288 kernel: with environment: Mar 4 00:47:49.205296 kernel: HOME=/ Mar 4 00:47:49.205303 kernel: TERM=linux Mar 4 00:47:49.205313 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 00:47:49.205324 systemd[1]: Detected virtualization microsoft. Mar 4 00:47:49.205332 systemd[1]: Detected architecture arm64. Mar 4 00:47:49.205339 systemd[1]: Running in initrd. Mar 4 00:47:49.205347 systemd[1]: No hostname configured, using default hostname. Mar 4 00:47:49.205354 systemd[1]: Hostname set to . Mar 4 00:47:49.205362 systemd[1]: Initializing machine ID from random generator. Mar 4 00:47:49.205372 systemd[1]: Queued start job for default target initrd.target. Mar 4 00:47:49.205380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 00:47:49.205388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 00:47:49.205397 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 4 00:47:49.205405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 4 00:47:49.205413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 4 00:47:49.205421 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 4 00:47:49.205430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 4 00:47:49.205440 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 4 00:47:49.205448 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 00:47:49.205456 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 00:47:49.205464 systemd[1]: Reached target paths.target - Path Units. Mar 4 00:47:49.205472 systemd[1]: Reached target slices.target - Slice Units. Mar 4 00:47:49.205480 systemd[1]: Reached target swap.target - Swaps. Mar 4 00:47:49.205488 systemd[1]: Reached target timers.target - Timer Units. Mar 4 00:47:49.205495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 00:47:49.205505 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 00:47:49.205513 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 4 00:47:49.205521 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 4 00:47:49.205529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 00:47:49.205537 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 00:47:49.205545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 00:47:49.205553 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 00:47:49.205561 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 4 00:47:49.205570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 4 00:47:49.205578 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 4 00:47:49.205586 systemd[1]: Starting systemd-fsck-usr.service... Mar 4 00:47:49.205594 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 00:47:49.205602 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 00:47:49.205627 systemd-journald[217]: Collecting audit messages is disabled. Mar 4 00:47:49.205648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:47:49.205657 systemd-journald[217]: Journal started Mar 4 00:47:49.205675 systemd-journald[217]: Runtime Journal (/run/log/journal/ef96da87d77e4e7091dca3b3c398e95f) is 8.0M, max 78.5M, 70.5M free. Mar 4 00:47:49.211790 systemd-modules-load[218]: Inserted module 'overlay' Mar 4 00:47:49.226745 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 00:47:49.235700 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 4 00:47:49.236005 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 4 00:47:49.246402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 00:47:49.261359 kernel: Bridge firewalling registered Mar 4 00:47:49.249751 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 4 00:47:49.256222 systemd[1]: Finished systemd-fsck-usr.service. Mar 4 00:47:49.265138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 00:47:49.273953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:47:49.292978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 00:47:49.300024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 4 00:47:49.316885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 00:47:49.333507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 00:47:49.351427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 00:47:49.357619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 00:47:49.362601 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 00:47:49.375700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 00:47:49.397935 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 4 00:47:49.405854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 00:47:49.424557 dracut-cmdline[251]: dracut-dracut-053 Mar 4 00:47:49.433997 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010 Mar 4 00:47:49.460035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 00:47:49.474357 systemd-resolved[252]: Positive Trust Anchors: Mar 4 00:47:49.474370 systemd-resolved[252]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 00:47:49.474403 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 00:47:49.476579 systemd-resolved[252]: Defaulting to hostname 'linux'. Mar 4 00:47:49.478158 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 00:47:49.486132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 00:47:49.525813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 00:47:49.600702 kernel: SCSI subsystem initialized Mar 4 00:47:49.608695 kernel: Loading iSCSI transport class v2.0-870. Mar 4 00:47:49.617708 kernel: iscsi: registered transport (tcp) Mar 4 00:47:49.634637 kernel: iscsi: registered transport (qla4xxx) Mar 4 00:47:49.634705 kernel: QLogic iSCSI HBA Driver Mar 4 00:47:49.673832 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 4 00:47:49.692060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 4 00:47:49.716670 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 4 00:47:49.716703 kernel: device-mapper: uevent: version 1.0.3 Mar 4 00:47:49.721925 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 4 00:47:49.769699 kernel: raid6: neonx8 gen() 15810 MB/s Mar 4 00:47:49.788710 kernel: raid6: neonx4 gen() 15695 MB/s Mar 4 00:47:49.807715 kernel: raid6: neonx2 gen() 13261 MB/s Mar 4 00:47:49.827692 kernel: raid6: neonx1 gen() 10498 MB/s Mar 4 00:47:49.846691 kernel: raid6: int64x8 gen() 6978 MB/s Mar 4 00:47:49.865694 kernel: raid6: int64x4 gen() 7369 MB/s Mar 4 00:47:49.885695 kernel: raid6: int64x2 gen() 6146 MB/s Mar 4 00:47:49.907971 kernel: raid6: int64x1 gen() 5072 MB/s Mar 4 00:47:49.907981 kernel: raid6: using algorithm neonx8 gen() 15810 MB/s Mar 4 00:47:49.930756 kernel: raid6: .... xor() 12036 MB/s, rmw enabled Mar 4 00:47:49.930775 kernel: raid6: using neon recovery algorithm Mar 4 00:47:49.942243 kernel: xor: measuring software checksum speed Mar 4 00:47:49.942272 kernel: 8regs : 19745 MB/sec Mar 4 00:47:49.945194 kernel: 32regs : 19636 MB/sec Mar 4 00:47:49.948024 kernel: arm64_neon : 26998 MB/sec Mar 4 00:47:49.951371 kernel: xor: using function: arm64_neon (26998 MB/sec) Mar 4 00:47:50.001974 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 4 00:47:50.012039 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 4 00:47:50.025861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 00:47:50.047228 systemd-udevd[438]: Using default interface naming scheme 'v255'. Mar 4 00:47:50.051700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 00:47:50.067815 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 4 00:47:50.088845 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Mar 4 00:47:50.117721 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 4 00:47:50.133893 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 00:47:50.170157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 00:47:50.186870 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 4 00:47:50.207232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 4 00:47:50.216521 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 4 00:47:50.236446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 00:47:50.247510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 00:47:50.263912 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 4 00:47:50.285710 kernel: hv_vmbus: Vmbus version:5.3 Mar 4 00:47:50.287523 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 4 00:47:50.311491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 00:47:50.341433 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 4 00:47:50.341455 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 4 00:47:50.341464 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 4 00:47:50.341474 kernel: hv_vmbus: registering driver hv_storvsc Mar 4 00:47:50.311639 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 00:47:50.356919 kernel: PTP clock support registered Mar 4 00:47:50.356943 kernel: scsi host0: storvsc_host_t Mar 4 00:47:50.356983 kernel: scsi host1: storvsc_host_t Mar 4 00:47:50.322799 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 4 00:47:50.396634 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 4 00:47:50.396660 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 4 00:47:50.396851 kernel: hv_vmbus: registering driver hv_netvsc
Mar 4 00:47:50.396871 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 4 00:47:50.336166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:47:50.336376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:47:50.426911 kernel: hv_vmbus: registering driver hid_hyperv
Mar 4 00:47:50.426936 kernel: hv_utils: Registering HyperV Utility Driver
Mar 4 00:47:50.426946 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 4 00:47:50.426956 kernel: hv_vmbus: registering driver hv_utils
Mar 4 00:47:50.349987 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:47:50.649231 kernel: hv_utils: Heartbeat IC version 3.0
Mar 4 00:47:50.649254 kernel: hv_utils: Shutdown IC version 3.2
Mar 4 00:47:50.649264 kernel: hv_utils: TimeSync IC version 4.0
Mar 4 00:47:50.649280 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 4 00:47:50.389882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:47:50.409759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:47:50.409863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:47:50.680713 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Mar 4 00:47:50.680875 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 00:47:50.642757 systemd-resolved[252]: Clock change detected. Flushing caches.
Mar 4 00:47:50.689901 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: VF slot 1 added
Mar 4 00:47:50.661191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:47:50.709080 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Mar 4 00:47:50.709270 kernel: hv_vmbus: registering driver hv_pci
Mar 4 00:47:50.712539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:47:50.748726 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 4 00:47:50.748900 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Mar 4 00:47:50.748994 kernel: hv_pci ff7ac347-70c7-472c-9b65-526e03a43b20: PCI VMBus probing: Using version 0x10004
Mar 4 00:47:50.749090 kernel: sd 1:0:0:0: [sda] Write Protect is off
Mar 4 00:47:50.749174 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 4 00:47:50.749257 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 4 00:47:50.749338 kernel: hv_pci ff7ac347-70c7-472c-9b65-526e03a43b20: PCI host bridge to bus 70c7:00
Mar 4 00:47:50.745221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:47:50.796927 kernel: pci_bus 70c7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 4 00:47:50.797090 kernel: pci_bus 70c7:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 4 00:47:50.797170 kernel: pci 70c7:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 4 00:47:50.797194 kernel: pci 70c7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:47:50.797208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:47:50.797297 kernel: pci 70c7:00:02.0: enabling Extended Tags
Mar 4 00:47:50.797315 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:47:50.800807 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Mar 4 00:47:50.800973 kernel: pci 70c7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 70c7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 4 00:47:50.816702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:47:50.842106 kernel: pci_bus 70c7:00: busn_res: [bus 00-ff] end is updated to 00
Mar 4 00:47:50.842263 kernel: pci 70c7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:47:50.842375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#310 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:47:50.891578 kernel: mlx5_core 70c7:00:02.0: enabling device (0000 -> 0002)
Mar 4 00:47:50.896574 kernel: mlx5_core 70c7:00:02.0: firmware version: 16.30.5026
Mar 4 00:47:51.092774 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: VF registering: eth1
Mar 4 00:47:51.092966 kernel: mlx5_core 70c7:00:02.0 eth1: joined to eth0
Mar 4 00:47:51.098396 kernel: mlx5_core 70c7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 4 00:47:51.110599 kernel: mlx5_core 70c7:00:02.0 enP28871s1: renamed from eth1
Mar 4 00:47:51.415018 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 4 00:47:51.451602 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 4 00:47:51.476764 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (502)
Mar 4 00:47:51.476825 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (504)
Mar 4 00:47:51.486189 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 4 00:47:51.499414 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 4 00:47:51.510428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 00:47:51.523754 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 00:47:51.546620 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:47:51.555608 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:47:51.564581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:47:52.567399 disk-uuid[605]: The operation has completed successfully.
Mar 4 00:47:52.573391 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:47:52.626375 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 00:47:52.626465 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 00:47:52.658684 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 00:47:52.669580 sh[718]: Success
Mar 4 00:47:52.706582 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 4 00:47:52.978556 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 00:47:52.988661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 00:47:52.997663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 00:47:53.032384 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d
Mar 4 00:47:53.032451 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:47:53.038004 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 00:47:53.042130 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 00:47:53.045507 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 00:47:53.405839 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 00:47:53.410403 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 00:47:53.430717 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 00:47:53.440770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 00:47:53.468946 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:47:53.469008 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:47:53.472911 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:47:53.514425 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:47:53.522840 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 00:47:53.532764 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:47:53.541282 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 00:47:53.559073 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 00:47:53.569208 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:47:53.581044 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:47:53.616985 systemd-networkd[906]: lo: Link UP
Mar 4 00:47:53.616994 systemd-networkd[906]: lo: Gained carrier
Mar 4 00:47:53.618540 systemd-networkd[906]: Enumeration completed
Mar 4 00:47:53.618639 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:47:53.627733 systemd[1]: Reached target network.target - Network.
Mar 4 00:47:53.631249 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:47:53.631252 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:47:53.704572 kernel: mlx5_core 70c7:00:02.0 enP28871s1: Link up
Mar 4 00:47:53.744573 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: Data path switched to VF: enP28871s1
Mar 4 00:47:53.745451 systemd-networkd[906]: enP28871s1: Link UP
Mar 4 00:47:53.745556 systemd-networkd[906]: eth0: Link UP
Mar 4 00:47:53.745709 systemd-networkd[906]: eth0: Gained carrier
Mar 4 00:47:53.745719 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:47:53.756070 systemd-networkd[906]: enP28871s1: Gained carrier
Mar 4 00:47:53.774610 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:47:54.632507 ignition[901]: Ignition 2.19.0
Mar 4 00:47:54.632518 ignition[901]: Stage: fetch-offline
Mar 4 00:47:54.636603 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:47:54.632553 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:54.632574 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:54.632658 ignition[901]: parsed url from cmdline: ""
Mar 4 00:47:54.632661 ignition[901]: no config URL provided
Mar 4 00:47:54.632665 ignition[901]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:47:54.657817 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 4 00:47:54.632672 ignition[901]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:47:54.632676 ignition[901]: failed to fetch config: resource requires networking
Mar 4 00:47:54.632847 ignition[901]: Ignition finished successfully
Mar 4 00:47:54.680271 ignition[916]: Ignition 2.19.0
Mar 4 00:47:54.680277 ignition[916]: Stage: fetch
Mar 4 00:47:54.680501 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:54.680515 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:54.680769 ignition[916]: parsed url from cmdline: ""
Mar 4 00:47:54.680773 ignition[916]: no config URL provided
Mar 4 00:47:54.680779 ignition[916]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:47:54.680788 ignition[916]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:47:54.680813 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 4 00:47:54.827495 ignition[916]: GET result: OK
Mar 4 00:47:54.827603 ignition[916]: config has been read from IMDS userdata
Mar 4 00:47:54.827646 ignition[916]: parsing config with SHA512: 9f505b136e0ac65db09bb6d45993ae9411799447a00253fa0e7f1653c02fcc6ee797bfd329c92abbe5a8e8dcf7271f432ea5057d2fe942a8425eda52f6cd0c87
Mar 4 00:47:54.831387 unknown[916]: fetched base config from "system"
Mar 4 00:47:54.831762 ignition[916]: fetch: fetch complete
Mar 4 00:47:54.831394 unknown[916]: fetched base config from "system"
Mar 4 00:47:54.831767 ignition[916]: fetch: fetch passed
Mar 4 00:47:54.831399 unknown[916]: fetched user config from "azure"
Mar 4 00:47:54.831807 ignition[916]: Ignition finished successfully
Mar 4 00:47:54.836002 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 00:47:54.851790 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 00:47:54.874941 ignition[922]: Ignition 2.19.0
Mar 4 00:47:54.874949 ignition[922]: Stage: kargs
Mar 4 00:47:54.878969 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 00:47:54.875111 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:54.875124 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:54.876101 ignition[922]: kargs: kargs passed
Mar 4 00:47:54.876151 ignition[922]: Ignition finished successfully
Mar 4 00:47:54.904869 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 00:47:54.919032 ignition[928]: Ignition 2.19.0
Mar 4 00:47:54.919043 ignition[928]: Stage: disks
Mar 4 00:47:54.924608 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 00:47:54.919207 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:54.931249 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 00:47:54.919219 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:54.936549 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 00:47:54.920077 ignition[928]: disks: disks passed
Mar 4 00:47:54.946666 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:47:54.920123 ignition[928]: Ignition finished successfully
Mar 4 00:47:54.955162 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 00:47:54.964582 systemd[1]: Reached target basic.target - Basic System.
Mar 4 00:47:54.989843 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 00:47:55.102008 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 4 00:47:55.116495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 00:47:55.128230 systemd-networkd[906]: eth0: Gained IPv6LL
Mar 4 00:47:55.130782 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 00:47:55.187580 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none.
Mar 4 00:47:55.187624 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 00:47:55.191642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:47:55.258628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:47:55.278574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Mar 4 00:47:55.289236 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:47:55.289277 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:47:55.292839 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:47:55.296699 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 00:47:55.303644 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:47:55.308613 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 4 00:47:55.320259 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 00:47:55.320290 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:47:55.327216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:47:55.339831 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 00:47:55.358837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 00:47:55.912846 coreos-metadata[965]: Mar 04 00:47:55.912 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 00:47:55.919958 coreos-metadata[965]: Mar 04 00:47:55.919 INFO Fetch successful
Mar 4 00:47:55.919958 coreos-metadata[965]: Mar 04 00:47:55.919 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 4 00:47:55.935429 coreos-metadata[965]: Mar 04 00:47:55.935 INFO Fetch successful
Mar 4 00:47:55.952500 coreos-metadata[965]: Mar 04 00:47:55.951 INFO wrote hostname ci-4081.3.6-n-32bda88c6e to /sysroot/etc/hostname
Mar 4 00:47:55.955339 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:47:56.187276 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 00:47:56.227053 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory
Mar 4 00:47:56.235157 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 00:47:56.240827 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 00:47:57.457269 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 00:47:57.469776 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 00:47:57.478164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 00:47:57.494159 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 00:47:57.498343 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:47:57.523716 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 00:47:57.538088 ignition[1067]: INFO : Ignition 2.19.0
Mar 4 00:47:57.541621 ignition[1067]: INFO : Stage: mount
Mar 4 00:47:57.541621 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:57.541621 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:57.559840 ignition[1067]: INFO : mount: mount passed
Mar 4 00:47:57.559840 ignition[1067]: INFO : Ignition finished successfully
Mar 4 00:47:57.545834 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 00:47:57.567760 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 00:47:57.579799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:47:57.614844 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078)
Mar 4 00:47:57.614893 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:47:57.619788 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:47:57.623260 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:47:57.635526 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:47:57.631750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:47:57.656937 ignition[1095]: INFO : Ignition 2.19.0
Mar 4 00:47:57.660337 ignition[1095]: INFO : Stage: files
Mar 4 00:47:57.660337 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:57.660337 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:57.660337 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 00:47:57.678244 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 00:47:57.678244 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 00:47:57.825103 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 00:47:57.831109 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 00:47:57.831109 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 00:47:57.831109 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:47:57.831109 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 4 00:47:57.825597 unknown[1095]: wrote ssh authorized keys file for user: core
Mar 4 00:47:57.926943 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 00:47:58.074523 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 4 00:47:58.507438 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 4 00:47:58.816134 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 4 00:47:58.816134 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 4 00:47:58.832750 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: files passed
Mar 4 00:47:58.841150 ignition[1095]: INFO : Ignition finished successfully
Mar 4 00:47:58.835390 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 00:47:58.875372 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 00:47:58.888724 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 00:47:58.904766 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 00:47:58.904857 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 00:47:58.940604 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:47:58.940604 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:47:58.955006 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:47:58.956116 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:47:58.967602 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 00:47:58.984983 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 00:47:59.008498 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 00:47:59.008625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 00:47:59.018686 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 00:47:59.028290 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 00:47:59.037895 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 00:47:59.055694 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 00:47:59.072100 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:47:59.087860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 00:47:59.103399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:47:59.113920 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:47:59.119699 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 00:47:59.128930 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 00:47:59.129046 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:47:59.142292 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 00:47:59.147195 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 00:47:59.156148 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 00:47:59.165339 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:47:59.174446 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 00:47:59.183993 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 00:47:59.193334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:47:59.203343 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 00:47:59.212368 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 00:47:59.222138 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 00:47:59.229956 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 00:47:59.230071 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:47:59.241963 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:47:59.246933 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:47:59.256306 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 00:47:59.260554 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:47:59.266237 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 00:47:59.266344 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:47:59.280282 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 00:47:59.280389 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:47:59.286005 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 00:47:59.286092 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 00:47:59.294485 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 4 00:47:59.357090 ignition[1147]: INFO : Ignition 2.19.0
Mar 4 00:47:59.357090 ignition[1147]: INFO : Stage: umount
Mar 4 00:47:59.357090 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:59.357090 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:59.357090 ignition[1147]: INFO : umount: umount passed
Mar 4 00:47:59.357090 ignition[1147]: INFO : Ignition finished successfully
Mar 4 00:47:59.294584 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:47:59.325868 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 00:47:59.341205 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 00:47:59.351410 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 00:47:59.351681 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:47:59.361587 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 00:47:59.362687 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:47:59.375829 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 00:47:59.376454 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 00:47:59.377293 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 00:47:59.386277 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 00:47:59.386383 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 00:47:59.398952 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 00:47:59.399009 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 00:47:59.408458 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 00:47:59.408493 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 00:47:59.417804 systemd[1]: Stopped target network.target - Network.
Mar 4 00:47:59.425849 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 00:47:59.425893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:47:59.435353 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:47:59.444622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:47:59.453580 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:47:59.462913 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:47:59.470801 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:47:59.479013 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:47:59.479067 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:47:59.487166 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:47:59.487204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:47:59.499782 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:47:59.499834 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:47:59.508334 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:47:59.508367 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:47:59.516968 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:47:59.525591 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 4 00:47:59.534085 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 4 00:47:59.534166 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 4 00:47:59.538457 systemd-networkd[906]: eth0: DHCPv6 lease lost Mar 4 00:47:59.550309 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 4 00:47:59.550913 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 4 00:47:59.560433 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 4 00:47:59.560613 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 4 00:47:59.572131 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 4 00:47:59.572191 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 4 00:47:59.601843 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 4 00:47:59.610052 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 4 00:47:59.610133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 00:47:59.766263 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: Data path switched from VF: enP28871s1 Mar 4 00:47:59.619843 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 4 00:47:59.619893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 4 00:47:59.628872 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 4 00:47:59.628914 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 4 00:47:59.639714 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 4 00:47:59.639769 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 00:47:59.651344 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 4 00:47:59.665502 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 4 00:47:59.665991 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 4 00:47:59.684348 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 4 00:47:59.684483 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 00:47:59.693988 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 4 00:47:59.694059 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 4 00:47:59.703296 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 4 00:47:59.703333 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 00:47:59.713039 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 4 00:47:59.713092 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 4 00:47:59.726709 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 4 00:47:59.726760 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 4 00:47:59.739821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 00:47:59.739871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 00:47:59.760579 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 4 00:47:59.760631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 4 00:47:59.784856 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 4 00:47:59.797512 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 4 00:47:59.797593 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 00:47:59.808992 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Mar 4 00:47:59.809046 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 00:47:59.820167 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 4 00:47:59.820216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 00:47:59.831424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 00:47:59.831471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:47:59.841215 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 4 00:47:59.841315 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 4 00:47:59.850131 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 4 00:47:59.850231 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 4 00:47:59.859226 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 4 00:47:59.886522 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 4 00:47:59.894066 systemd[1]: Switching root. 
Mar 4 00:48:00.035702 systemd-journald[217]: Journal stopped
Mar 4 00:47:49.201259 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 4 00:47:49.201281 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026
Mar 4 00:47:49.201290 kernel: KASLR enabled
Mar 4 00:47:49.201295 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 4 00:47:49.201303 kernel: printk: bootconsole [pl11] enabled
Mar 4 00:47:49.201308 kernel: efi: EFI v2.7 by EDK II
Mar 4 00:47:49.201315 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 4 00:47:49.201321 kernel: random: crng init done
Mar 4 00:47:49.201327 kernel: ACPI: Early table checksum verification disabled
Mar 4 00:47:49.201333 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 4 00:47:49.201339 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201345 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201353 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 4 00:47:49.201359 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201367 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201373 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201380 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201388 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201394 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201400 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 4 00:47:49.201407 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:47:49.201413 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 4 00:47:49.201419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 4 00:47:49.201426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 4 00:47:49.201432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 4 00:47:49.201439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 4 00:47:49.201445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 4 00:47:49.201451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 4 00:47:49.201459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 4 00:47:49.201465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 4 00:47:49.201472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 4 00:47:49.201478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 4 00:47:49.201484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 4 00:47:49.201491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 4 00:47:49.201497 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 4 00:47:49.201503 kernel: Zone ranges:
Mar 4 00:47:49.201509 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 4 00:47:49.201516 kernel: DMA32 empty
Mar 4 00:47:49.201522 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:47:49.201529 kernel: Movable zone start for each node
Mar 4 00:47:49.201539 kernel: Early memory node ranges
Mar 4 00:47:49.201546 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 4 00:47:49.201552 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 4 00:47:49.201559 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 4 00:47:49.201566 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 4 00:47:49.201574 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 4 00:47:49.201581 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 4 00:47:49.201588 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:47:49.201595 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 4 00:47:49.201601 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 4 00:47:49.201608 kernel: psci: probing for conduit method from ACPI.
Mar 4 00:47:49.201615 kernel: psci: PSCIv1.1 detected in firmware.
Mar 4 00:47:49.201621 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 4 00:47:49.201628 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 4 00:47:49.201635 kernel: psci: SMC Calling Convention v1.4
Mar 4 00:47:49.201641 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 4 00:47:49.201648 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 4 00:47:49.201656 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 4 00:47:49.201663 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 4 00:47:49.201670 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 4 00:47:49.201677 kernel: Detected PIPT I-cache on CPU0
Mar 4 00:47:49.203722 kernel: CPU features: detected: GIC system register CPU interface
Mar 4 00:47:49.203738 kernel: CPU features: detected: Hardware dirty bit management
Mar 4 00:47:49.203745 kernel: CPU features: detected: Spectre-BHB
Mar 4 00:47:49.203753 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 4 00:47:49.203760 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 4 00:47:49.203767 kernel: CPU features: detected: ARM erratum 1418040
Mar 4 00:47:49.203774 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 4 00:47:49.203785 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 4 00:47:49.203792 kernel: alternatives: applying boot alternatives
Mar 4 00:47:49.203800 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:47:49.203808 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 00:47:49.203815 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 00:47:49.203822 kernel: Fallback order for Node 0: 0
Mar 4 00:47:49.203829 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 4 00:47:49.203835 kernel: Policy zone: Normal
Mar 4 00:47:49.203842 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 00:47:49.203849 kernel: software IO TLB: area num 2.
Mar 4 00:47:49.203856 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 4 00:47:49.203865 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 4 00:47:49.203872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 4 00:47:49.203879 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 00:47:49.203886 kernel: rcu: RCU event tracing is enabled.
Mar 4 00:47:49.203893 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 4 00:47:49.203900 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 00:47:49.203907 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 00:47:49.203914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 00:47:49.203921 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 4 00:47:49.203927 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 4 00:47:49.203934 kernel: GICv3: 960 SPIs implemented
Mar 4 00:47:49.203942 kernel: GICv3: 0 Extended SPIs implemented
Mar 4 00:47:49.203949 kernel: Root IRQ handler: gic_handle_irq
Mar 4 00:47:49.203956 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 4 00:47:49.203963 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 4 00:47:49.203970 kernel: ITS: No ITS available, not enabling LPIs
Mar 4 00:47:49.203977 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 00:47:49.203984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:47:49.203991 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 4 00:47:49.203998 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 4 00:47:49.204005 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 4 00:47:49.204012 kernel: Console: colour dummy device 80x25
Mar 4 00:47:49.204021 kernel: printk: console [tty1] enabled
Mar 4 00:47:49.204028 kernel: ACPI: Core revision 20230628
Mar 4 00:47:49.204035 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 4 00:47:49.204043 kernel: pid_max: default: 32768 minimum: 301
Mar 4 00:47:49.204050 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 00:47:49.204057 kernel: landlock: Up and running.
Mar 4 00:47:49.204063 kernel: SELinux: Initializing.
Mar 4 00:47:49.204070 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:47:49.204077 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:47:49.204086 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:47:49.204093 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:47:49.204101 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 4 00:47:49.204108 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 4 00:47:49.204114 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 4 00:47:49.204121 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 00:47:49.204128 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 00:47:49.204136 kernel: Remapping and enabling EFI services.
Mar 4 00:47:49.204149 kernel: smp: Bringing up secondary CPUs ...
Mar 4 00:47:49.204156 kernel: Detected PIPT I-cache on CPU1
Mar 4 00:47:49.204164 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 4 00:47:49.204171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:47:49.204180 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 4 00:47:49.204187 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 00:47:49.204195 kernel: SMP: Total of 2 processors activated.
Mar 4 00:47:49.204202 kernel: CPU features: detected: 32-bit EL0 Support
Mar 4 00:47:49.204210 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 4 00:47:49.204219 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 4 00:47:49.204226 kernel: CPU features: detected: CRC32 instructions
Mar 4 00:47:49.204233 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 4 00:47:49.204241 kernel: CPU features: detected: LSE atomic instructions
Mar 4 00:47:49.204248 kernel: CPU features: detected: Privileged Access Never
Mar 4 00:47:49.204255 kernel: CPU: All CPU(s) started at EL1
Mar 4 00:47:49.204263 kernel: alternatives: applying system-wide alternatives
Mar 4 00:47:49.204270 kernel: devtmpfs: initialized
Mar 4 00:47:49.204277 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 00:47:49.204286 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 4 00:47:49.204294 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 00:47:49.204301 kernel: SMBIOS 3.1.0 present.
Mar 4 00:47:49.204309 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 4 00:47:49.204316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 00:47:49.204323 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 4 00:47:49.204331 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 4 00:47:49.204338 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 4 00:47:49.204346 kernel: audit: initializing netlink subsys (disabled)
Mar 4 00:47:49.204355 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 4 00:47:49.204362 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 00:47:49.204369 kernel: cpuidle: using governor menu
Mar 4 00:47:49.204377 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 4 00:47:49.204384 kernel: ASID allocator initialised with 32768 entries
Mar 4 00:47:49.204391 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 00:47:49.204399 kernel: Serial: AMBA PL011 UART driver
Mar 4 00:47:49.204407 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 4 00:47:49.204414 kernel: Modules: 0 pages in range for non-PLT usage
Mar 4 00:47:49.204423 kernel: Modules: 509008 pages in range for PLT usage
Mar 4 00:47:49.204430 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 00:47:49.204438 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 00:47:49.204445 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 4 00:47:49.204452 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 4 00:47:49.204460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 00:47:49.204467 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 00:47:49.204475 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 4 00:47:49.204482 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 4 00:47:49.204491 kernel: ACPI: Added _OSI(Module Device)
Mar 4 00:47:49.204498 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 00:47:49.204506 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 00:47:49.204513 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 00:47:49.204520 kernel: ACPI: Interpreter enabled
Mar 4 00:47:49.204527 kernel: ACPI: Using GIC for interrupt routing
Mar 4 00:47:49.204535 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 4 00:47:49.204542 kernel: printk: console [ttyAMA0] enabled
Mar 4 00:47:49.204549 kernel: printk: bootconsole [pl11] disabled
Mar 4 00:47:49.204558 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 4 00:47:49.204566 kernel: iommu: Default domain type: Translated
Mar 4 00:47:49.204573 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 4 00:47:49.204580 kernel: efivars: Registered efivars operations
Mar 4 00:47:49.204587 kernel: vgaarb: loaded
Mar 4 00:47:49.204595 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 4 00:47:49.204602 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 00:47:49.204609 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 00:47:49.204616 kernel: pnp: PnP ACPI init
Mar 4 00:47:49.204625 kernel: pnp: PnP ACPI: found 0 devices
Mar 4 00:47:49.204632 kernel: NET: Registered PF_INET protocol family
Mar 4 00:47:49.204640 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 00:47:49.204647 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 00:47:49.204655 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 00:47:49.204662 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 00:47:49.204669 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 00:47:49.204677 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 00:47:49.204695 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:47:49.204706 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:47:49.204714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 00:47:49.204721 kernel: PCI: CLS 0 bytes, default 64
Mar 4 00:47:49.204729 kernel: kvm [1]: HYP mode not available
Mar 4 00:47:49.204736 kernel: Initialise system trusted keyrings
Mar 4 00:47:49.204744 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 00:47:49.204751 kernel: Key type asymmetric registered
Mar 4 00:47:49.204758 kernel: Asymmetric key parser 'x509' registered
Mar 4 00:47:49.204766 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 4 00:47:49.204774 kernel: io scheduler mq-deadline registered
Mar 4 00:47:49.204782 kernel: io scheduler kyber registered
Mar 4 00:47:49.204789 kernel: io scheduler bfq registered
Mar 4 00:47:49.204797 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 00:47:49.204804 kernel: thunder_xcv, ver 1.0
Mar 4 00:47:49.204811 kernel: thunder_bgx, ver 1.0
Mar 4 00:47:49.204819 kernel: nicpf, ver 1.0
Mar 4 00:47:49.204826 kernel: nicvf, ver 1.0
Mar 4 00:47:49.204966 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 4 00:47:49.205044 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T00:47:48 UTC (1772585268)
Mar 4 00:47:49.205055 kernel: efifb: probing for efifb
Mar 4 00:47:49.205063 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 4 00:47:49.205070 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 4 00:47:49.205078 kernel: efifb: scrolling: redraw
Mar 4 00:47:49.205085 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 4 00:47:49.205092 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:47:49.205100 kernel: fb0: EFI VGA frame buffer device
Mar 4 00:47:49.205110 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 4 00:47:49.205118 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 4 00:47:49.205125 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 4 00:47:49.205133 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 4 00:47:49.205140 kernel: watchdog: Hard watchdog permanently disabled
Mar 4 00:47:49.205147 kernel: NET: Registered PF_INET6 protocol family
Mar 4 00:47:49.205154 kernel: Segment Routing with IPv6
Mar 4 00:47:49.205162 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 00:47:49.205169 kernel: NET: Registered PF_PACKET protocol family
Mar 4 00:47:49.205178 kernel: Key type dns_resolver registered
Mar 4 00:47:49.205185 kernel: registered taskstats version 1
Mar 4 00:47:49.205192 kernel: Loading compiled-in X.509 certificates
Mar 4 00:47:49.205199 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659'
Mar 4 00:47:49.205207 kernel: Key type .fscrypt registered
Mar 4 00:47:49.205214 kernel: Key type fscrypt-provisioning registered
Mar 4 00:47:49.205221 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 00:47:49.205228 kernel: ima: Allocated hash algorithm: sha1
Mar 4 00:47:49.205235 kernel: ima: No architecture policies found
Mar 4 00:47:49.205244 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 4 00:47:49.205252 kernel: clk: Disabling unused clocks
Mar 4 00:47:49.205259 kernel: Freeing unused kernel memory: 39424K
Mar 4 00:47:49.205266 kernel: Run /init as init process
Mar 4 00:47:49.205274 kernel: with arguments:
Mar 4 00:47:49.205281 kernel: /init
Mar 4 00:47:49.205288 kernel: with environment:
Mar 4 00:47:49.205296 kernel: HOME=/
Mar 4 00:47:49.205303 kernel: TERM=linux
Mar 4 00:47:49.205313 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:47:49.205324 systemd[1]: Detected virtualization microsoft.
Mar 4 00:47:49.205332 systemd[1]: Detected architecture arm64.
Mar 4 00:47:49.205339 systemd[1]: Running in initrd.
Mar 4 00:47:49.205347 systemd[1]: No hostname configured, using default hostname.
Mar 4 00:47:49.205354 systemd[1]: Hostname set to .
Mar 4 00:47:49.205362 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:47:49.205372 systemd[1]: Queued start job for default target initrd.target.
Mar 4 00:47:49.205380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:47:49.205388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:47:49.205397 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 00:47:49.205405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:47:49.205413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 00:47:49.205421 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 00:47:49.205430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 00:47:49.205440 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 00:47:49.205448 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:47:49.205456 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:47:49.205464 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:47:49.205472 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:47:49.205480 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:47:49.205488 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:47:49.205495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:47:49.205505 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:47:49.205513 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:47:49.205521 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:47:49.205529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:47:49.205537 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:47:49.205545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:47:49.205553 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:47:49.205561 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 00:47:49.205570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:47:49.205578 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 00:47:49.205586 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 00:47:49.205594 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:47:49.205602 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:47:49.205627 systemd-journald[217]: Collecting audit messages is disabled.
Mar 4 00:47:49.205648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:47:49.205657 systemd-journald[217]: Journal started
Mar 4 00:47:49.205675 systemd-journald[217]: Runtime Journal (/run/log/journal/ef96da87d77e4e7091dca3b3c398e95f) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:47:49.211790 systemd-modules-load[218]: Inserted module 'overlay'
Mar 4 00:47:49.226745 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:47:49.235700 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 00:47:49.236005 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 00:47:49.246402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:47:49.261359 kernel: Bridge firewalling registered
Mar 4 00:47:49.249751 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 4 00:47:49.256222 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 00:47:49.265138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:47:49.273953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:47:49.292978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:47:49.300024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:47:49.316885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:47:49.333507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:47:49.351427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:47:49.357619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:47:49.362601 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:47:49.375700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:47:49.397935 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 00:47:49.405854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:47:49.424557 dracut-cmdline[251]: dracut-dracut-053
Mar 4 00:47:49.433997 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:47:49.460035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:47:49.474357 systemd-resolved[252]: Positive Trust Anchors:
Mar 4 00:47:49.474370 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:47:49.474403 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:47:49.476579 systemd-resolved[252]: Defaulting to hostname 'linux'.
Mar 4 00:47:49.478158 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:47:49.486132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:47:49.525813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:47:49.600702 kernel: SCSI subsystem initialized
Mar 4 00:47:49.608695 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 00:47:49.617708 kernel: iscsi: registered transport (tcp)
Mar 4 00:47:49.634637 kernel: iscsi: registered transport (qla4xxx)
Mar 4 00:47:49.634705 kernel: QLogic iSCSI HBA Driver
Mar 4 00:47:49.673832 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:47:49.692060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 00:47:49.716670 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 00:47:49.716703 kernel: device-mapper: uevent: version 1.0.3 Mar 4 00:47:49.721925 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 4 00:47:49.769699 kernel: raid6: neonx8 gen() 15810 MB/s Mar 4 00:47:49.788710 kernel: raid6: neonx4 gen() 15695 MB/s Mar 4 00:47:49.807715 kernel: raid6: neonx2 gen() 13261 MB/s Mar 4 00:47:49.827692 kernel: raid6: neonx1 gen() 10498 MB/s Mar 4 00:47:49.846691 kernel: raid6: int64x8 gen() 6978 MB/s Mar 4 00:47:49.865694 kernel: raid6: int64x4 gen() 7369 MB/s Mar 4 00:47:49.885695 kernel: raid6: int64x2 gen() 6146 MB/s Mar 4 00:47:49.907971 kernel: raid6: int64x1 gen() 5072 MB/s Mar 4 00:47:49.907981 kernel: raid6: using algorithm neonx8 gen() 15810 MB/s Mar 4 00:47:49.930756 kernel: raid6: .... xor() 12036 MB/s, rmw enabled Mar 4 00:47:49.930775 kernel: raid6: using neon recovery algorithm Mar 4 00:47:49.942243 kernel: xor: measuring software checksum speed Mar 4 00:47:49.942272 kernel: 8regs : 19745 MB/sec Mar 4 00:47:49.945194 kernel: 32regs : 19636 MB/sec Mar 4 00:47:49.948024 kernel: arm64_neon : 26998 MB/sec Mar 4 00:47:49.951371 kernel: xor: using function: arm64_neon (26998 MB/sec) Mar 4 00:47:50.001974 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 4 00:47:50.012039 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 4 00:47:50.025861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 00:47:50.047228 systemd-udevd[438]: Using default interface naming scheme 'v255'. Mar 4 00:47:50.051700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 00:47:50.067815 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 4 00:47:50.088845 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Mar 4 00:47:50.117721 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 4 00:47:50.133893 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 00:47:50.170157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 00:47:50.186870 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 4 00:47:50.207232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 4 00:47:50.216521 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 4 00:47:50.236446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 00:47:50.247510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 00:47:50.263912 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 4 00:47:50.285710 kernel: hv_vmbus: Vmbus version:5.3 Mar 4 00:47:50.287523 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 4 00:47:50.311491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 00:47:50.341433 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 4 00:47:50.341455 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 4 00:47:50.341464 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 4 00:47:50.341474 kernel: hv_vmbus: registering driver hv_storvsc Mar 4 00:47:50.311639 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 00:47:50.356919 kernel: PTP clock support registered Mar 4 00:47:50.356943 kernel: scsi host0: storvsc_host_t Mar 4 00:47:50.356983 kernel: scsi host1: storvsc_host_t Mar 4 00:47:50.322799 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 4 00:47:50.396634 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 4 00:47:50.396660 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 4 00:47:50.396851 kernel: hv_vmbus: registering driver hv_netvsc Mar 4 00:47:50.396871 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 4 00:47:50.336166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 00:47:50.336376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:47:50.426911 kernel: hv_vmbus: registering driver hid_hyperv Mar 4 00:47:50.426936 kernel: hv_utils: Registering HyperV Utility Driver Mar 4 00:47:50.426946 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 4 00:47:50.426956 kernel: hv_vmbus: registering driver hv_utils Mar 4 00:47:50.349987 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:47:50.649231 kernel: hv_utils: Heartbeat IC version 3.0 Mar 4 00:47:50.649254 kernel: hv_utils: Shutdown IC version 3.2 Mar 4 00:47:50.649264 kernel: hv_utils: TimeSync IC version 4.0 Mar 4 00:47:50.649280 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 4 00:47:50.389882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:47:50.409759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 00:47:50.409863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:47:50.680713 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Mar 4 00:47:50.680875 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 4 00:47:50.642757 systemd-resolved[252]: Clock change detected. Flushing caches. 
Mar 4 00:47:50.689901 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: VF slot 1 added Mar 4 00:47:50.661191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 00:47:50.709080 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Mar 4 00:47:50.709270 kernel: hv_vmbus: registering driver hv_pci Mar 4 00:47:50.712539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:47:50.748726 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 4 00:47:50.748900 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Mar 4 00:47:50.748994 kernel: hv_pci ff7ac347-70c7-472c-9b65-526e03a43b20: PCI VMBus probing: Using version 0x10004 Mar 4 00:47:50.749090 kernel: sd 1:0:0:0: [sda] Write Protect is off Mar 4 00:47:50.749174 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 4 00:47:50.749257 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 4 00:47:50.749338 kernel: hv_pci ff7ac347-70c7-472c-9b65-526e03a43b20: PCI host bridge to bus 70c7:00 Mar 4 00:47:50.745221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 4 00:47:50.796927 kernel: pci_bus 70c7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 4 00:47:50.797090 kernel: pci_bus 70c7:00: No busn resource found for root bus, will use [bus 00-ff] Mar 4 00:47:50.797170 kernel: pci 70c7:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 4 00:47:50.797194 kernel: pci 70c7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 4 00:47:50.797208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 4 00:47:50.797297 kernel: pci 70c7:00:02.0: enabling Extended Tags Mar 4 00:47:50.797315 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:47:50.800807 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Mar 4 00:47:50.800973 kernel: pci 70c7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 70c7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 4 00:47:50.816702 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 4 00:47:50.842106 kernel: pci_bus 70c7:00: busn_res: [bus 00-ff] end is updated to 00 Mar 4 00:47:50.842263 kernel: pci 70c7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 4 00:47:50.842375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#310 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 4 00:47:50.891578 kernel: mlx5_core 70c7:00:02.0: enabling device (0000 -> 0002) Mar 4 00:47:50.896574 kernel: mlx5_core 70c7:00:02.0: firmware version: 16.30.5026 Mar 4 00:47:51.092774 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: VF registering: eth1 Mar 4 00:47:51.092966 kernel: mlx5_core 70c7:00:02.0 eth1: joined to eth0 Mar 4 00:47:51.098396 kernel: mlx5_core 70c7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 4 00:47:51.110599 kernel: mlx5_core 70c7:00:02.0 enP28871s1: renamed from eth1 Mar 4 00:47:51.415018 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 4 00:47:51.451602 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 4 00:47:51.476764 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (502) Mar 4 00:47:51.476825 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (504) Mar 4 00:47:51.486189 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 4 00:47:51.499414 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 4 00:47:51.510428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 4 00:47:51.523754 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 4 00:47:51.546620 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:47:51.555608 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:47:51.564581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:47:52.567399 disk-uuid[605]: The operation has completed successfully. Mar 4 00:47:52.573391 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 00:47:52.626375 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 4 00:47:52.626465 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 00:47:52.658684 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 4 00:47:52.669580 sh[718]: Success Mar 4 00:47:52.706582 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 4 00:47:52.978556 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 00:47:52.988661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 4 00:47:52.997663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 00:47:53.032384 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d Mar 4 00:47:53.032451 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:47:53.038004 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 00:47:53.042130 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 00:47:53.045507 kernel: BTRFS info (device dm-0): using free space tree Mar 4 00:47:53.405839 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 00:47:53.410403 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 00:47:53.430717 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 4 00:47:53.440770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 4 00:47:53.468946 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:47:53.469008 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:47:53.472911 kernel: BTRFS info (device sda6): using free space tree Mar 4 00:47:53.514425 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 00:47:53.522840 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 4 00:47:53.532764 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:47:53.541282 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 00:47:53.559073 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 00:47:53.569208 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 00:47:53.581044 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 00:47:53.616985 systemd-networkd[906]: lo: Link UP Mar 4 00:47:53.616994 systemd-networkd[906]: lo: Gained carrier Mar 4 00:47:53.618540 systemd-networkd[906]: Enumeration completed Mar 4 00:47:53.618639 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 00:47:53.627733 systemd[1]: Reached target network.target - Network. Mar 4 00:47:53.631249 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:47:53.631252 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 4 00:47:53.704572 kernel: mlx5_core 70c7:00:02.0 enP28871s1: Link up Mar 4 00:47:53.744573 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: Data path switched to VF: enP28871s1 Mar 4 00:47:53.745451 systemd-networkd[906]: enP28871s1: Link UP Mar 4 00:47:53.745556 systemd-networkd[906]: eth0: Link UP Mar 4 00:47:53.745709 systemd-networkd[906]: eth0: Gained carrier Mar 4 00:47:53.745719 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:47:53.756070 systemd-networkd[906]: enP28871s1: Gained carrier Mar 4 00:47:53.774610 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 4 00:47:54.632507 ignition[901]: Ignition 2.19.0 Mar 4 00:47:54.632518 ignition[901]: Stage: fetch-offline Mar 4 00:47:54.636603 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 00:47:54.632553 ignition[901]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:47:54.632574 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:47:54.632658 ignition[901]: parsed url from cmdline: "" Mar 4 00:47:54.632661 ignition[901]: no config URL provided Mar 4 00:47:54.632665 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 00:47:54.657817 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 4 00:47:54.632672 ignition[901]: no config at "/usr/lib/ignition/user.ign" Mar 4 00:47:54.632676 ignition[901]: failed to fetch config: resource requires networking Mar 4 00:47:54.632847 ignition[901]: Ignition finished successfully Mar 4 00:47:54.680271 ignition[916]: Ignition 2.19.0 Mar 4 00:47:54.680277 ignition[916]: Stage: fetch Mar 4 00:47:54.680501 ignition[916]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:47:54.680515 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:47:54.680769 ignition[916]: parsed url from cmdline: "" Mar 4 00:47:54.680773 ignition[916]: no config URL provided Mar 4 00:47:54.680779 ignition[916]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 00:47:54.680788 ignition[916]: no config at "/usr/lib/ignition/user.ign" Mar 4 00:47:54.680813 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 4 00:47:54.827495 ignition[916]: GET result: OK Mar 4 00:47:54.827603 ignition[916]: config has been read from IMDS userdata Mar 4 00:47:54.827646 ignition[916]: parsing config with SHA512: 9f505b136e0ac65db09bb6d45993ae9411799447a00253fa0e7f1653c02fcc6ee797bfd329c92abbe5a8e8dcf7271f432ea5057d2fe942a8425eda52f6cd0c87 Mar 4 00:47:54.831387 unknown[916]: fetched base config from "system" Mar 4 00:47:54.831762 ignition[916]: fetch: fetch complete Mar 4 00:47:54.831394 unknown[916]: fetched base config from "system" Mar 4 00:47:54.831767 ignition[916]: fetch: fetch passed Mar 4 00:47:54.831399 unknown[916]: fetched user config from "azure" Mar 4 00:47:54.831807 ignition[916]: Ignition finished successfully Mar 4 00:47:54.836002 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 4 00:47:54.851790 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 4 00:47:54.874941 ignition[922]: Ignition 2.19.0 Mar 4 00:47:54.874949 ignition[922]: Stage: kargs Mar 4 00:47:54.878969 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 4 00:47:54.875111 ignition[922]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:47:54.875124 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:47:54.876101 ignition[922]: kargs: kargs passed Mar 4 00:47:54.876151 ignition[922]: Ignition finished successfully Mar 4 00:47:54.904869 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 4 00:47:54.919032 ignition[928]: Ignition 2.19.0 Mar 4 00:47:54.919043 ignition[928]: Stage: disks Mar 4 00:47:54.924608 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 4 00:47:54.919207 ignition[928]: no configs at "/usr/lib/ignition/base.d" Mar 4 00:47:54.931249 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 4 00:47:54.919219 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:47:54.936549 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 4 00:47:54.920077 ignition[928]: disks: disks passed Mar 4 00:47:54.946666 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 00:47:54.920123 ignition[928]: Ignition finished successfully Mar 4 00:47:54.955162 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 00:47:54.964582 systemd[1]: Reached target basic.target - Basic System. Mar 4 00:47:54.989843 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 4 00:47:55.102008 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 4 00:47:55.116495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 4 00:47:55.128230 systemd-networkd[906]: eth0: Gained IPv6LL Mar 4 00:47:55.130782 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 4 00:47:55.187580 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none. Mar 4 00:47:55.187624 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 4 00:47:55.191642 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 4 00:47:55.258628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 00:47:55.278574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Mar 4 00:47:55.289236 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:47:55.289277 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:47:55.292839 kernel: BTRFS info (device sda6): using free space tree Mar 4 00:47:55.296699 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 4 00:47:55.303644 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 00:47:55.308613 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 4 00:47:55.320259 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 4 00:47:55.320290 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 00:47:55.327216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 4 00:47:55.339831 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 4 00:47:55.358837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 4 00:47:55.912846 coreos-metadata[965]: Mar 04 00:47:55.912 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 4 00:47:55.919958 coreos-metadata[965]: Mar 04 00:47:55.919 INFO Fetch successful Mar 4 00:47:55.919958 coreos-metadata[965]: Mar 04 00:47:55.919 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 4 00:47:55.935429 coreos-metadata[965]: Mar 04 00:47:55.935 INFO Fetch successful Mar 4 00:47:55.952500 coreos-metadata[965]: Mar 04 00:47:55.951 INFO wrote hostname ci-4081.3.6-n-32bda88c6e to /sysroot/etc/hostname Mar 4 00:47:55.955339 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 4 00:47:56.187276 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory Mar 4 00:47:56.227053 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory Mar 4 00:47:56.235157 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory Mar 4 00:47:56.240827 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory Mar 4 00:47:57.457269 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 4 00:47:57.469776 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 4 00:47:57.478164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 4 00:47:57.494159 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 4 00:47:57.498343 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:47:57.523716 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 4 00:47:57.538088 ignition[1067]: INFO : Ignition 2.19.0 Mar 4 00:47:57.541621 ignition[1067]: INFO : Stage: mount Mar 4 00:47:57.541621 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 00:47:57.541621 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:47:57.559840 ignition[1067]: INFO : mount: mount passed Mar 4 00:47:57.559840 ignition[1067]: INFO : Ignition finished successfully Mar 4 00:47:57.545834 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 4 00:47:57.567760 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 4 00:47:57.579799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 00:47:57.614844 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078) Mar 4 00:47:57.614893 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 00:47:57.619788 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 00:47:57.623260 kernel: BTRFS info (device sda6): using free space tree Mar 4 00:47:57.635526 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 00:47:57.631750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 4 00:47:57.656937 ignition[1095]: INFO : Ignition 2.19.0 Mar 4 00:47:57.660337 ignition[1095]: INFO : Stage: files Mar 4 00:47:57.660337 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 00:47:57.660337 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 00:47:57.660337 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping Mar 4 00:47:57.678244 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 4 00:47:57.678244 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 4 00:47:57.825103 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 4 00:47:57.831109 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 4 00:47:57.831109 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 4 00:47:57.831109 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 4 00:47:57.831109 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 4 00:47:57.825597 unknown[1095]: wrote ssh authorized keys file for user: core Mar 4 00:47:57.926943 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 4 00:47:58.074523 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 4 
00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 4 00:47:58.083128 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 4 00:47:58.507438 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 4 00:47:58.816134 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 4 00:47:58.816134 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 4 00:47:58.832750 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:47:58.841150 ignition[1095]: INFO : files: files passed
Mar 4 00:47:58.841150 ignition[1095]: INFO : Ignition finished successfully
Mar 4 00:47:58.835390 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 00:47:58.875372 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 00:47:58.888724 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 00:47:58.904766 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 00:47:58.904857 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 00:47:58.940604 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:47:58.940604 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:47:58.955006 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:47:58.956116 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:47:58.967602 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 00:47:58.984983 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 00:47:59.008498 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 00:47:59.008625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 00:47:59.018686 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 00:47:59.028290 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 00:47:59.037895 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 00:47:59.055694 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 00:47:59.072100 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:47:59.087860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 00:47:59.103399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:47:59.113920 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:47:59.119699 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 00:47:59.128930 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 00:47:59.129046 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:47:59.142292 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 00:47:59.147195 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 00:47:59.156148 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 00:47:59.165339 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:47:59.174446 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 00:47:59.183993 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 00:47:59.193334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:47:59.203343 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 00:47:59.212368 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 00:47:59.222138 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 00:47:59.229956 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 00:47:59.230071 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:47:59.241963 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:47:59.246933 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:47:59.256306 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 00:47:59.260554 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:47:59.266237 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 00:47:59.266344 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:47:59.280282 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 00:47:59.280389 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:47:59.286005 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 00:47:59.286092 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 00:47:59.294485 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 4 00:47:59.357090 ignition[1147]: INFO : Ignition 2.19.0
Mar 4 00:47:59.357090 ignition[1147]: INFO : Stage: umount
Mar 4 00:47:59.357090 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:47:59.357090 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:47:59.357090 ignition[1147]: INFO : umount: umount passed
Mar 4 00:47:59.357090 ignition[1147]: INFO : Ignition finished successfully
Mar 4 00:47:59.294584 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:47:59.325868 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 00:47:59.341205 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 00:47:59.351410 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 00:47:59.351681 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:47:59.361587 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 00:47:59.362687 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:47:59.375829 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 00:47:59.376454 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 00:47:59.377293 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 00:47:59.386277 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 00:47:59.386383 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 00:47:59.398952 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 00:47:59.399009 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 00:47:59.408458 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 00:47:59.408493 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 00:47:59.417804 systemd[1]: Stopped target network.target - Network.
Mar 4 00:47:59.425849 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 00:47:59.425893 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:47:59.435353 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:47:59.444622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:47:59.453580 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:47:59.462913 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:47:59.470801 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:47:59.479013 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:47:59.479067 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:47:59.487166 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:47:59.487204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:47:59.499782 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:47:59.499834 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:47:59.508334 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:47:59.508367 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:47:59.516968 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:47:59.525591 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 00:47:59.534085 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 00:47:59.534166 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 00:47:59.538457 systemd-networkd[906]: eth0: DHCPv6 lease lost
Mar 4 00:47:59.550309 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 00:47:59.550913 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 00:47:59.560433 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 00:47:59.560613 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 00:47:59.572131 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 00:47:59.572191 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:47:59.601843 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 00:47:59.610052 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 00:47:59.610133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:47:59.766263 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: Data path switched from VF: enP28871s1
Mar 4 00:47:59.619843 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 00:47:59.619893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:47:59.628872 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 00:47:59.628914 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:47:59.639714 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 00:47:59.639769 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:47:59.651344 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:47:59.665502 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 00:47:59.665991 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 00:47:59.684348 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 00:47:59.684483 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:47:59.693988 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 00:47:59.694059 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:47:59.703296 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 00:47:59.703333 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:47:59.713039 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 00:47:59.713092 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:47:59.726709 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 00:47:59.726760 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:47:59.739821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:47:59.739871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:47:59.760579 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 00:47:59.760631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 00:47:59.784856 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 00:47:59.797512 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 00:47:59.797593 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:47:59.808992 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 00:47:59.809046 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:47:59.820167 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 00:47:59.820216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:47:59.831424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:47:59.831471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:47:59.841215 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 00:47:59.841315 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 00:47:59.850131 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 00:47:59.850231 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 00:47:59.859226 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 00:47:59.886522 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 00:47:59.894066 systemd[1]: Switching root.
Mar 4 00:48:00.035702 systemd-journald[217]: Journal stopped
Mar 4 00:48:06.980572 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 4 00:48:06.980598 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 00:48:06.980609 kernel: SELinux: policy capability open_perms=1
Mar 4 00:48:06.980619 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 00:48:06.980626 kernel: SELinux: policy capability always_check_network=0
Mar 4 00:48:06.980634 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 00:48:06.980643 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 00:48:06.980651 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 00:48:06.980659 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 00:48:06.980667 systemd[1]: Successfully loaded SELinux policy in 226.863ms.
Mar 4 00:48:06.980678 kernel: audit: type=1403 audit(1772585281.844:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 00:48:06.980687 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.131ms.
Mar 4 00:48:06.980697 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:48:06.980706 systemd[1]: Detected virtualization microsoft.
Mar 4 00:48:06.980715 systemd[1]: Detected architecture arm64.
Mar 4 00:48:06.980726 systemd[1]: Detected first boot.
Mar 4 00:48:06.980735 systemd[1]: Hostname set to .
Mar 4 00:48:06.980744 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:48:06.980754 zram_generator::config[1188]: No configuration found.
Mar 4 00:48:06.980765 systemd[1]: Populated /etc with preset unit settings.
Mar 4 00:48:06.980774 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 4 00:48:06.980785 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 4 00:48:06.980794 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 4 00:48:06.980804 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 00:48:06.980813 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 00:48:06.980822 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 00:48:06.980831 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 00:48:06.980841 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 00:48:06.980851 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 00:48:06.980861 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 00:48:06.980870 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 00:48:06.980879 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:48:06.980888 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:48:06.980898 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 00:48:06.980907 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 00:48:06.980916 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 00:48:06.980926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:48:06.980936 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 4 00:48:06.980946 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:48:06.980955 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 4 00:48:06.980967 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 4 00:48:06.980977 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:48:06.980986 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 00:48:06.980995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:48:06.981006 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:48:06.981016 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:48:06.981025 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:48:06.981034 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 00:48:06.981044 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 00:48:06.981053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:48:06.981063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:48:06.981074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:48:06.981084 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 00:48:06.981093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 00:48:06.981103 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 00:48:06.981112 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 00:48:06.981122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 00:48:06.981133 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 00:48:06.981143 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 00:48:06.981153 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 00:48:06.981163 systemd[1]: Reached target machines.target - Containers.
Mar 4 00:48:06.981173 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 00:48:06.981183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:48:06.981193 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:48:06.981202 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 00:48:06.981213 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:48:06.981223 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:48:06.981232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:48:06.981242 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 00:48:06.981251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:48:06.981261 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 00:48:06.981271 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 4 00:48:06.981281 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 4 00:48:06.981290 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 4 00:48:06.981301 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 4 00:48:06.981310 kernel: fuse: init (API version 7.39)
Mar 4 00:48:06.981319 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:48:06.981329 kernel: loop: module loaded
Mar 4 00:48:06.981338 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:48:06.981347 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 00:48:06.981371 systemd-journald[1291]: Collecting audit messages is disabled.
Mar 4 00:48:06.981392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 00:48:06.981403 systemd-journald[1291]: Journal started
Mar 4 00:48:06.981424 systemd-journald[1291]: Runtime Journal (/run/log/journal/4e47dc8c09ba4d029416c6a49cb8f7b1) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:48:05.950098 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 00:48:06.215007 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 4 00:48:06.215351 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 4 00:48:06.215660 systemd[1]: systemd-journald.service: Consumed 2.503s CPU time.
Mar 4 00:48:06.996598 kernel: ACPI: bus type drm_connector registered
Mar 4 00:48:07.003575 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:48:07.016123 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 4 00:48:07.016143 systemd[1]: Stopped verity-setup.service.
Mar 4 00:48:07.030886 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:48:07.031779 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 00:48:07.036926 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 00:48:07.042546 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 00:48:07.047162 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 00:48:07.052261 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 00:48:07.058013 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 00:48:07.062825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 00:48:07.068755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:48:07.075246 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 00:48:07.075387 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 00:48:07.081243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:48:07.081384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:48:07.087452 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:48:07.087610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:48:07.092838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:48:07.092968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:48:07.099270 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 00:48:07.099411 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 00:48:07.105808 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:48:07.105942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:48:07.111305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 00:48:07.124347 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 00:48:07.135676 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 00:48:07.141615 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 00:48:07.146751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 00:48:07.150764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:48:07.157276 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 00:48:07.162731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 00:48:07.168628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 00:48:07.176496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 00:48:07.176547 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:48:07.182459 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 00:48:07.195985 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 00:48:07.203727 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 00:48:07.211047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:48:07.217792 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 00:48:07.226751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 00:48:07.234505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 00:48:07.237771 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 00:48:07.245732 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 00:48:07.251674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:48:07.257023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:48:07.266290 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 00:48:07.278722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:48:07.287730 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 00:48:07.302653 udevadm[1328]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 4 00:48:07.392741 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Mar 4 00:48:07.392756 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Mar 4 00:48:07.399222 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:48:07.400167 systemd-journald[1291]: Time spent on flushing to /var/log/journal/4e47dc8c09ba4d029416c6a49cb8f7b1 is 12.475ms for 902 entries.
Mar 4 00:48:07.400167 systemd-journald[1291]: System Journal (/var/log/journal/4e47dc8c09ba4d029416c6a49cb8f7b1) is 8.0M, max 2.6G, 2.6G free.
Mar 4 00:48:08.543826 systemd-journald[1291]: Received client request to flush runtime journal.
Mar 4 00:48:08.543884 kernel: loop0: detected capacity change from 0 to 114432
Mar 4 00:48:07.414699 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 00:48:07.834336 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 00:48:07.839923 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 00:48:07.849695 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 00:48:08.244536 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:48:08.545975 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 00:48:09.734014 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 00:48:09.743714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:48:09.759371 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Mar 4 00:48:09.759383 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Mar 4 00:48:09.762515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:48:10.735582 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 00:48:10.769575 kernel: loop1: detected capacity change from 0 to 200864
Mar 4 00:48:10.956587 kernel: loop2: detected capacity change from 0 to 31320
Mar 4 00:48:11.511987 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 00:48:11.523775 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:48:11.542786 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Mar 4 00:48:12.853375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:48:12.871795 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:48:12.898008 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 4 00:48:13.048699 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 00:48:13.460588 kernel: hv_vmbus: registering driver hv_balloon
Mar 4 00:48:13.483270 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 4 00:48:13.483325 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 4 00:48:13.483366 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 00:48:13.468766 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 4 00:48:13.509875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:48:13.555970 kernel: hv_vmbus: registering driver hyperv_fb
Mar 4 00:48:13.557639 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 4 00:48:13.557673 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 4 00:48:13.565709 kernel: Console: switching to colour dummy device 80x25
Mar 4 00:48:13.572276 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:48:13.575682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:48:13.575837 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:48:13.592756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:48:14.085677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:48:14.085929 kernel: mlx5_core 70c7:00:02.0 enP28871s1: Link up
Mar 4 00:48:14.022333 systemd-networkd[1359]: lo: Link UP
Mar 4 00:48:14.022337 systemd-networkd[1359]: lo: Gained carrier
Mar 4 00:48:14.024219 systemd-networkd[1359]: Enumeration completed
Mar 4 00:48:14.024313 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:48:14.024519 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:48:14.024522 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:48:14.034690 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 4 00:48:14.110697 kernel: hv_netvsc 000d3af5-e835-000d-3af5-e835000d3af5 eth0: Data path switched to VF: enP28871s1
Mar 4 00:48:14.110918 kernel: loop3: detected capacity change from 0 to 114328
Mar 4 00:48:14.110114 systemd-networkd[1359]: enP28871s1: Link UP
Mar 4 00:48:14.110205 systemd-networkd[1359]: eth0: Link UP
Mar 4 00:48:14.110208 systemd-networkd[1359]: eth0: Gained carrier
Mar 4 00:48:14.110220 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:48:14.114068 systemd-networkd[1359]: enP28871s1: Gained carrier
Mar 4 00:48:14.126616 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:48:14.131750 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 00:48:14.132596 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 00:48:14.218057 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1364)
Mar 4 00:48:14.250330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 00:48:14.266761 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 00:48:14.761317 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 00:48:15.476731 systemd-networkd[1359]: eth0: Gained IPv6LL
Mar 4 00:48:15.479451 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 00:48:15.740947 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 00:48:15.755744 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 00:48:15.766068 kernel: loop4: detected capacity change from 0 to 114432
Mar 4 00:48:15.993576 kernel: loop5: detected capacity change from 0 to 200864
Mar 4 00:48:16.008593 kernel: loop6: detected capacity change from 0 to 31320
Mar 4 00:48:16.102233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:48:16.200999 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 00:48:16.237935 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 00:48:16.243838 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:48:16.256701 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 00:48:16.260995 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 00:48:16.286998 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 00:48:16.297643 kernel: loop7: detected capacity change from 0 to 114328
Mar 4 00:48:16.824754 (sd-merge)[1449]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 4 00:48:16.825182 (sd-merge)[1449]: Merged extensions into '/usr'.
Mar 4 00:48:16.836833 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 00:48:16.836845 systemd[1]: Reloading...
Mar 4 00:48:16.891956 zram_generator::config[1481]: No configuration found.
Mar 4 00:48:17.188721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:48:17.262867 systemd[1]: Reloading finished in 425 ms.
Mar 4 00:48:17.286969 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 00:48:17.299726 systemd[1]: Starting ensure-sysext.service...
Mar 4 00:48:17.304764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:48:17.312716 systemd[1]: Reloading requested from client PID 1536 ('systemctl') (unit ensure-sysext.service)...
Mar 4 00:48:17.312735 systemd[1]: Reloading...
Mar 4 00:48:17.340003 systemd-tmpfiles[1537]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 00:48:17.340273 systemd-tmpfiles[1537]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 00:48:17.341995 systemd-tmpfiles[1537]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 00:48:17.342230 systemd-tmpfiles[1537]: ACLs are not supported, ignoring.
Mar 4 00:48:17.342282 systemd-tmpfiles[1537]: ACLs are not supported, ignoring.
Mar 4 00:48:17.346343 systemd-tmpfiles[1537]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:48:17.346355 systemd-tmpfiles[1537]: Skipping /boot
Mar 4 00:48:17.359123 systemd-tmpfiles[1537]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:48:17.359140 systemd-tmpfiles[1537]: Skipping /boot
Mar 4 00:48:17.390618 zram_generator::config[1562]: No configuration found.
Mar 4 00:48:17.496982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:48:17.571596 systemd[1]: Reloading finished in 258 ms.
Mar 4 00:48:17.597134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:48:17.608534 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 00:48:17.615823 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 00:48:17.626645 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 00:48:17.635832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:48:17.647782 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 00:48:17.656997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:48:17.659605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:48:17.666827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:48:17.675993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:48:17.682064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:48:17.688519 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:48:17.688698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 00:48:17.694504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 00:48:17.694641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 00:48:17.700462 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 00:48:17.700607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 00:48:17.708845 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 00:48:17.709087 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 00:48:17.712745 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 4 00:48:17.720441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 00:48:17.724700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 00:48:17.731705 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 00:48:17.743112 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 00:48:17.749129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 00:48:17.754622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 00:48:17.754690 systemd[1]: Reached target time-set.target - System Time Set. Mar 4 00:48:17.759951 systemd[1]: Finished ensure-sysext.service. Mar 4 00:48:17.763757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 00:48:17.763898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 4 00:48:17.769444 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 00:48:17.769573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 00:48:17.774860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 00:48:17.774985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 00:48:17.780906 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 00:48:17.781020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 00:48:17.789942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 00:48:17.790010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 00:48:17.923656 systemd-resolved[1627]: Positive Trust Anchors: Mar 4 00:48:17.923670 systemd-resolved[1627]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 00:48:17.923702 systemd-resolved[1627]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 00:48:18.242410 systemd-resolved[1627]: Using system hostname 'ci-4081.3.6-n-32bda88c6e'. Mar 4 00:48:18.244009 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 00:48:18.250190 systemd[1]: Reached target network.target - Network. 
Mar 4 00:48:18.254407 systemd[1]: Reached target network-online.target - Network is Online. Mar 4 00:48:18.259313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 00:48:18.360855 augenrules[1659]: No rules Mar 4 00:48:18.362370 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 00:48:18.795027 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 4 00:48:21.016398 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 4 00:48:21.024073 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 4 00:48:24.491584 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 4 00:48:24.505911 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 4 00:48:24.516716 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 4 00:48:24.527769 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 4 00:48:24.533411 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 00:48:24.538434 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 4 00:48:24.543987 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 4 00:48:24.549836 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 4 00:48:24.554811 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 4 00:48:24.560685 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 4 00:48:24.566388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 4 00:48:24.566507 systemd[1]: Reached target paths.target - Path Units. Mar 4 00:48:24.570545 systemd[1]: Reached target timers.target - Timer Units. Mar 4 00:48:24.577609 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 4 00:48:24.583737 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 4 00:48:24.592257 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 4 00:48:24.597373 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 4 00:48:24.602063 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 00:48:24.606139 systemd[1]: Reached target basic.target - Basic System. Mar 4 00:48:24.610388 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 4 00:48:24.610414 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 4 00:48:24.622636 systemd[1]: Starting chronyd.service - NTP client/server... Mar 4 00:48:24.628679 systemd[1]: Starting containerd.service - containerd container runtime... Mar 4 00:48:24.639712 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 4 00:48:24.645820 (chronyd)[1672]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 4 00:48:24.649693 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 4 00:48:24.655278 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 4 00:48:24.662719 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 4 00:48:24.663486 jq[1678]: false Mar 4 00:48:24.668645 chronyd[1681]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 4 00:48:24.669201 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 4 00:48:24.669236 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 4 00:48:24.670238 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 4 00:48:24.676013 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 4 00:48:24.677040 KVP[1682]: KVP starting; pid is:1682 Mar 4 00:48:24.678675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:48:24.687734 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 4 00:48:24.690930 chronyd[1681]: Timezone right/UTC failed leap second check, ignoring Mar 4 00:48:24.691849 chronyd[1681]: Loaded seccomp filter (level 2) Mar 4 00:48:24.695825 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 4 00:48:24.704701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 4 00:48:24.715243 KVP[1682]: KVP LIC Version: 3.1 Mar 4 00:48:24.715582 kernel: hv_utils: KVP IC version 4.0 Mar 4 00:48:24.721711 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 4 00:48:24.729747 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 4 00:48:24.741947 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 4 00:48:24.747733 extend-filesystems[1679]: Found loop4 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found loop5 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found loop6 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found loop7 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda1 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda2 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda3 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found usr Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda4 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda6 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda7 Mar 4 00:48:24.747733 extend-filesystems[1679]: Found sda9 Mar 4 00:48:24.747733 extend-filesystems[1679]: Checking size of /dev/sda9 Mar 4 00:48:24.916818 extend-filesystems[1679]: Old size kept for /dev/sda9 Mar 4 00:48:24.916818 extend-filesystems[1679]: Found sr0 Mar 4 00:48:24.893081 dbus-daemon[1675]: [system] SELinux support is enabled Mar 4 00:48:24.750192 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 4 00:48:24.750697 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 4 00:48:24.933497 update_engine[1699]: I20260304 00:48:24.840978 1699 main.cc:92] Flatcar Update Engine starting Mar 4 00:48:24.933497 update_engine[1699]: I20260304 00:48:24.911373 1699 update_check_scheduler.cc:74] Next update check in 4m46s Mar 4 00:48:24.751296 systemd[1]: Starting update-engine.service - Update Engine... Mar 4 00:48:24.758683 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 4 00:48:24.933936 jq[1700]: true Mar 4 00:48:24.776297 systemd[1]: Started chronyd.service - NTP client/server. 
Mar 4 00:48:24.792659 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 4 00:48:24.792850 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 4 00:48:24.934293 tar[1708]: linux-arm64/LICENSE Mar 4 00:48:24.934293 tar[1708]: linux-arm64/helm Mar 4 00:48:24.801787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 4 00:48:24.934540 jq[1717]: true Mar 4 00:48:24.801967 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 4 00:48:24.829511 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 4 00:48:24.829721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 4 00:48:24.846095 systemd[1]: motdgen.service: Deactivated successfully. Mar 4 00:48:24.846250 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 4 00:48:24.851428 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 4 00:48:24.893485 (ntainerd)[1718]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 4 00:48:24.907908 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 4 00:48:24.919310 systemd-logind[1694]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 4 00:48:24.926688 systemd-logind[1694]: New seat seat0. Mar 4 00:48:24.936246 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 4 00:48:24.936289 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 4 00:48:24.942395 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 4 00:48:24.942412 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 4 00:48:24.949233 systemd[1]: Started systemd-logind.service - User Login Management. Mar 4 00:48:24.958897 systemd[1]: Started update-engine.service - Update Engine. Mar 4 00:48:24.982700 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1741) Mar 4 00:48:25.004895 coreos-metadata[1674]: Mar 04 00:48:25.004 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 4 00:48:25.021306 coreos-metadata[1674]: Mar 04 00:48:25.008 INFO Fetch successful Mar 4 00:48:25.021306 coreos-metadata[1674]: Mar 04 00:48:25.008 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 4 00:48:25.021306 coreos-metadata[1674]: Mar 04 00:48:25.013 INFO Fetch successful Mar 4 00:48:25.021306 coreos-metadata[1674]: Mar 04 00:48:25.015 INFO Fetching http://168.63.129.16/machine/567430a6-4f7c-4ca4-89de-524f0fb08d7c/38f653ab%2Ddd93%2D4900%2Da8cf%2D6eb583fd99e0.%5Fci%2D4081.3.6%2Dn%2D32bda88c6e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 4 00:48:25.021306 coreos-metadata[1674]: Mar 04 00:48:25.015 INFO Fetch successful Mar 4 00:48:25.021306 coreos-metadata[1674]: Mar 04 00:48:25.015 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 4 00:48:25.016301 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 4 00:48:25.035389 coreos-metadata[1674]: Mar 04 00:48:25.035 INFO Fetch successful Mar 4 00:48:25.086763 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Mar 4 00:48:25.096246 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 4 00:48:25.098326 bash[1763]: Updated "/home/core/.ssh/authorized_keys" Mar 4 00:48:25.101402 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 4 00:48:25.115325 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 4 00:48:25.249373 locksmithd[1762]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 4 00:48:25.629514 sshd_keygen[1698]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 00:48:25.655324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 00:48:25.670658 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 4 00:48:25.678284 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 4 00:48:25.692637 systemd[1]: issuegen.service: Deactivated successfully. Mar 4 00:48:25.692816 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 4 00:48:25.712840 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 00:48:25.739693 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 4 00:48:25.751598 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 00:48:25.756663 tar[1708]: linux-arm64/README.md Mar 4 00:48:25.770880 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 4 00:48:25.785867 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 4 00:48:25.794361 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 00:48:25.799636 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 4 00:48:25.816689 containerd[1718]: time="2026-03-04T00:48:25.816539020Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 4 00:48:25.844036 containerd[1718]: time="2026-03-04T00:48:25.843833860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845259140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845291260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845306700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845451180Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845466420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845521700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845533380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845696460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845713020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845725100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846334 containerd[1718]: time="2026-03-04T00:48:25.845735180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846596 containerd[1718]: time="2026-03-04T00:48:25.845798580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846596 containerd[1718]: time="2026-03-04T00:48:25.845996660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846596 containerd[1718]: time="2026-03-04T00:48:25.846093260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:48:25.846596 containerd[1718]: time="2026-03-04T00:48:25.846106380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 4 00:48:25.846596 containerd[1718]: time="2026-03-04T00:48:25.846175900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 4 00:48:25.846596 containerd[1718]: time="2026-03-04T00:48:25.846213380Z" level=info msg="metadata content store policy set" policy=shared Mar 4 00:48:25.863862 containerd[1718]: time="2026-03-04T00:48:25.863828500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 4 00:48:25.864002 containerd[1718]: time="2026-03-04T00:48:25.863988620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 4 00:48:25.864232 containerd[1718]: time="2026-03-04T00:48:25.864194460Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 4 00:48:25.864295 containerd[1718]: time="2026-03-04T00:48:25.864239780Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 4 00:48:25.864295 containerd[1718]: time="2026-03-04T00:48:25.864258140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 4 00:48:25.864455 containerd[1718]: time="2026-03-04T00:48:25.864430460Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 4 00:48:25.864713 containerd[1718]: time="2026-03-04T00:48:25.864694100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 4 00:48:25.864995 containerd[1718]: time="2026-03-04T00:48:25.864894900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 4 00:48:25.864995 containerd[1718]: time="2026-03-04T00:48:25.864923900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 4 00:48:25.864995 containerd[1718]: time="2026-03-04T00:48:25.864937780Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 4 00:48:25.864995 containerd[1718]: time="2026-03-04T00:48:25.864953660Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.864995 containerd[1718]: time="2026-03-04T00:48:25.864969820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.864995 containerd[1718]: time="2026-03-04T00:48:25.864981860Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.865205 containerd[1718]: time="2026-03-04T00:48:25.865141940Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.865205 containerd[1718]: time="2026-03-04T00:48:25.865167900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.865205 containerd[1718]: time="2026-03-04T00:48:25.865181180Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.865205 containerd[1718]: time="2026-03-04T00:48:25.865193180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.865358 containerd[1718]: time="2026-03-04T00:48:25.865292700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 4 00:48:25.865358 containerd[1718]: time="2026-03-04T00:48:25.865319340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Mar 4 00:48:25.865358 containerd[1718]: time="2026-03-04T00:48:25.865334260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865358 containerd[1718]: time="2026-03-04T00:48:25.865346300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865544 containerd[1718]: time="2026-03-04T00:48:25.865480260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865544 containerd[1718]: time="2026-03-04T00:48:25.865502020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865544 containerd[1718]: time="2026-03-04T00:48:25.865517940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865544 containerd[1718]: time="2026-03-04T00:48:25.865530020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865721 containerd[1718]: time="2026-03-04T00:48:25.865654340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865721 containerd[1718]: time="2026-03-04T00:48:25.865674620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865721 containerd[1718]: time="2026-03-04T00:48:25.865693980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865721 containerd[1718]: time="2026-03-04T00:48:25.865705860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865894 containerd[1718]: time="2026-03-04T00:48:25.865825460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865894 containerd[1718]: time="2026-03-04T00:48:25.865850380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.865894 containerd[1718]: time="2026-03-04T00:48:25.865868380Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 00:48:25.866047 containerd[1718]: time="2026-03-04T00:48:25.865977180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.866047 containerd[1718]: time="2026-03-04T00:48:25.865996420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.866047 containerd[1718]: time="2026-03-04T00:48:25.866007340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 00:48:25.866192 containerd[1718]: time="2026-03-04T00:48:25.866144860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 00:48:25.866192 containerd[1718]: time="2026-03-04T00:48:25.866173020Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 00:48:25.866331 containerd[1718]: time="2026-03-04T00:48:25.866184180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 00:48:25.866331 containerd[1718]: time="2026-03-04T00:48:25.866273580Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 00:48:25.866331 containerd[1718]: time="2026-03-04T00:48:25.866286500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.866331 containerd[1718]: time="2026-03-04T00:48:25.866304660Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 00:48:25.866331 containerd[1718]: time="2026-03-04T00:48:25.866314460Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 00:48:25.866510 containerd[1718]: time="2026-03-04T00:48:25.866440020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 00:48:25.866875 containerd[1718]: time="2026-03-04T00:48:25.866814740Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 4 00:48:25.867574 containerd[1718]: time="2026-03-04T00:48:25.867040300Z" level=info msg="Connect containerd service"
Mar 4 00:48:25.867574 containerd[1718]: time="2026-03-04T00:48:25.867088580Z" level=info msg="using legacy CRI server"
Mar 4 00:48:25.867574 containerd[1718]: time="2026-03-04T00:48:25.867097060Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 4 00:48:25.867574 containerd[1718]: time="2026-03-04T00:48:25.867183060Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 4 00:48:25.867961 containerd[1718]: time="2026-03-04T00:48:25.867931820Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 00:48:25.868176 containerd[1718]: time="2026-03-04T00:48:25.868134220Z" level=info msg="Start subscribing containerd event"
Mar 4 00:48:25.868238 containerd[1718]: time="2026-03-04T00:48:25.868193540Z" level=info msg="Start recovering state"
Mar 4 00:48:25.868291 containerd[1718]: time="2026-03-04T00:48:25.868275220Z" level=info msg="Start event monitor"
Mar 4 00:48:25.868318 containerd[1718]: time="2026-03-04T00:48:25.868291900Z" level=info msg="Start snapshots syncer"
Mar 4 00:48:25.868318 containerd[1718]: time="2026-03-04T00:48:25.868302420Z" level=info msg="Start cni network conf syncer for default"
Mar 4 00:48:25.868318 containerd[1718]: time="2026-03-04T00:48:25.868309660Z" level=info msg="Start streaming server"
Mar 4 00:48:25.868520 containerd[1718]: time="2026-03-04T00:48:25.868494620Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 4 00:48:25.869617 containerd[1718]: time="2026-03-04T00:48:25.868547420Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 4 00:48:25.869617 containerd[1718]: time="2026-03-04T00:48:25.868611980Z" level=info msg="containerd successfully booted in 0.054366s"
Mar 4 00:48:25.868739 systemd[1]: Started containerd.service - containerd container runtime.
Mar 4 00:48:25.949465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:48:25.955870 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 4 00:48:25.956549 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 00:48:25.960871 systemd[1]: Startup finished in 611ms (kernel) + 12.698s (initrd) + 24.341s (userspace) = 37.651s.
Mar 4 00:48:26.225489 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:26.229687 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:26.240412 systemd-logind[1694]: New session 1 of user core.
Mar 4 00:48:26.242267 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 4 00:48:26.249248 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 4 00:48:26.252296 systemd-logind[1694]: New session 2 of user core.
Mar 4 00:48:26.280725 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 4 00:48:26.287919 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 4 00:48:26.320337 (systemd)[1853]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 4 00:48:26.362597 kubelet[1841]: E0304 00:48:26.361549 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 00:48:26.364281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 00:48:26.364417 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 00:48:26.461896 systemd[1853]: Queued start job for default target default.target.
Mar 4 00:48:26.469417 systemd[1853]: Created slice app.slice - User Application Slice.
Mar 4 00:48:26.469439 systemd[1853]: Reached target paths.target - Paths.
Mar 4 00:48:26.469451 systemd[1853]: Reached target timers.target - Timers.
Mar 4 00:48:26.470666 systemd[1853]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 4 00:48:26.481277 systemd[1853]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 4 00:48:26.481342 systemd[1853]: Reached target sockets.target - Sockets.
Mar 4 00:48:26.481354 systemd[1853]: Reached target basic.target - Basic System.
Mar 4 00:48:26.481397 systemd[1853]: Reached target default.target - Main User Target.
Mar 4 00:48:26.481423 systemd[1853]: Startup finished in 155ms.
Mar 4 00:48:26.481547 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 4 00:48:26.483365 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 4 00:48:26.484787 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 4 00:48:27.615578 waagent[1825]: 2026-03-04T00:48:27.614351Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Mar 4 00:48:27.619593 waagent[1825]: 2026-03-04T00:48:27.619517Z INFO Daemon Daemon OS: flatcar 4081.3.6
Mar 4 00:48:27.623613 waagent[1825]: 2026-03-04T00:48:27.623571Z INFO Daemon Daemon Python: 3.11.9
Mar 4 00:48:27.628674 waagent[1825]: 2026-03-04T00:48:27.628614Z INFO Daemon Daemon Run daemon
Mar 4 00:48:27.632536 waagent[1825]: 2026-03-04T00:48:27.632491Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Mar 4 00:48:27.640204 waagent[1825]: 2026-03-04T00:48:27.640158Z INFO Daemon Daemon Using waagent for provisioning
Mar 4 00:48:27.644990 waagent[1825]: 2026-03-04T00:48:27.644953Z INFO Daemon Daemon Activate resource disk
Mar 4 00:48:27.649375 waagent[1825]: 2026-03-04T00:48:27.649338Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Mar 4 00:48:27.659319 waagent[1825]: 2026-03-04T00:48:27.659274Z INFO Daemon Daemon Found device: None
Mar 4 00:48:27.663112 waagent[1825]: 2026-03-04T00:48:27.663071Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Mar 4 00:48:27.670032 waagent[1825]: 2026-03-04T00:48:27.669998Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Mar 4 00:48:27.681322 waagent[1825]: 2026-03-04T00:48:27.681276Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 4 00:48:27.686399 waagent[1825]: 2026-03-04T00:48:27.686355Z INFO Daemon Daemon Running default provisioning handler
Mar 4 00:48:27.697411 waagent[1825]: 2026-03-04T00:48:27.697330Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Mar 4 00:48:27.708627 waagent[1825]: 2026-03-04T00:48:27.708546Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 4 00:48:27.716499 waagent[1825]: 2026-03-04T00:48:27.716451Z INFO Daemon Daemon cloud-init is enabled: False
Mar 4 00:48:27.720620 waagent[1825]: 2026-03-04T00:48:27.720580Z INFO Daemon Daemon Copying ovf-env.xml
Mar 4 00:48:27.861979 waagent[1825]: 2026-03-04T00:48:27.861618Z INFO Daemon Daemon Successfully mounted dvd
Mar 4 00:48:27.895643 waagent[1825]: 2026-03-04T00:48:27.895506Z INFO Daemon Daemon Detect protocol endpoint
Mar 4 00:48:27.895702 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Mar 4 00:48:27.900310 waagent[1825]: 2026-03-04T00:48:27.900251Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 4 00:48:27.904897 waagent[1825]: 2026-03-04T00:48:27.904852Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Mar 4 00:48:27.910571 waagent[1825]: 2026-03-04T00:48:27.910007Z INFO Daemon Daemon Test for route to 168.63.129.16
Mar 4 00:48:27.914532 waagent[1825]: 2026-03-04T00:48:27.914491Z INFO Daemon Daemon Route to 168.63.129.16 exists
Mar 4 00:48:27.918578 waagent[1825]: 2026-03-04T00:48:27.918534Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Mar 4 00:48:27.968194 waagent[1825]: 2026-03-04T00:48:27.968150Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Mar 4 00:48:27.973619 waagent[1825]: 2026-03-04T00:48:27.973594Z INFO Daemon Daemon Wire protocol version:2012-11-30
Mar 4 00:48:27.977874 waagent[1825]: 2026-03-04T00:48:27.977842Z INFO Daemon Daemon Server preferred version:2015-04-05
Mar 4 00:48:28.358052 waagent[1825]: 2026-03-04T00:48:28.357902Z INFO Daemon Daemon Initializing goal state during protocol detection
Mar 4 00:48:28.363299 waagent[1825]: 2026-03-04T00:48:28.363247Z INFO Daemon Daemon Forcing an update of the goal state.
Mar 4 00:48:28.371375 waagent[1825]: 2026-03-04T00:48:28.371331Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 4 00:48:28.390663 waagent[1825]: 2026-03-04T00:48:28.390625Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179
Mar 4 00:48:28.395389 waagent[1825]: 2026-03-04T00:48:28.395350Z INFO Daemon
Mar 4 00:48:28.397675 waagent[1825]: 2026-03-04T00:48:28.397634Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8d400fd3-4a56-4845-83b6-27f838dce9ca eTag: 7818039193865425841 source: Fabric]
Mar 4 00:48:28.406834 waagent[1825]: 2026-03-04T00:48:28.406796Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Mar 4 00:48:28.412301 waagent[1825]: 2026-03-04T00:48:28.412264Z INFO Daemon
Mar 4 00:48:28.414607 waagent[1825]: 2026-03-04T00:48:28.414570Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Mar 4 00:48:28.424336 waagent[1825]: 2026-03-04T00:48:28.424304Z INFO Daemon Daemon Downloading artifacts profile blob
Mar 4 00:48:28.562167 waagent[1825]: 2026-03-04T00:48:28.562086Z INFO Daemon Downloaded certificate {'thumbprint': '3AFA986D067F431DA1D21438F74681E8A2015DF2', 'hasPrivateKey': True}
Mar 4 00:48:28.570183 waagent[1825]: 2026-03-04T00:48:28.570139Z INFO Daemon Fetch goal state completed
Mar 4 00:48:28.610578 waagent[1825]: 2026-03-04T00:48:28.610473Z INFO Daemon Daemon Starting provisioning
Mar 4 00:48:28.614999 waagent[1825]: 2026-03-04T00:48:28.614946Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 4 00:48:28.618842 waagent[1825]: 2026-03-04T00:48:28.618807Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-32bda88c6e]
Mar 4 00:48:28.645970 waagent[1825]: 2026-03-04T00:48:28.645902Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-32bda88c6e]
Mar 4 00:48:28.651018 waagent[1825]: 2026-03-04T00:48:28.650971Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 4 00:48:28.656050 waagent[1825]: 2026-03-04T00:48:28.656009Z INFO Daemon Daemon Primary interface is [eth0]
Mar 4 00:48:28.686301 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:48:28.686309 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:48:28.686353 systemd-networkd[1359]: eth0: DHCP lease lost
Mar 4 00:48:28.687814 waagent[1825]: 2026-03-04T00:48:28.687734Z INFO Daemon Daemon Create user account if not exists
Mar 4 00:48:28.692344 waagent[1825]: 2026-03-04T00:48:28.692298Z INFO Daemon Daemon User core already exists, skip useradd
Mar 4 00:48:28.696824 waagent[1825]: 2026-03-04T00:48:28.696788Z INFO Daemon Daemon Configure sudoer
Mar 4 00:48:28.697656 systemd-networkd[1359]: eth0: DHCPv6 lease lost
Mar 4 00:48:28.700515 waagent[1825]: 2026-03-04T00:48:28.700463Z INFO Daemon Daemon Configure sshd
Mar 4 00:48:28.704121 waagent[1825]: 2026-03-04T00:48:28.704075Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Mar 4 00:48:28.714058 waagent[1825]: 2026-03-04T00:48:28.714018Z INFO Daemon Daemon Deploy ssh public key.
Mar 4 00:48:28.723609 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:48:29.831590 waagent[1825]: 2026-03-04T00:48:29.827498Z INFO Daemon Daemon Provisioning complete
Mar 4 00:48:29.846101 waagent[1825]: 2026-03-04T00:48:29.846056Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 4 00:48:29.851560 waagent[1825]: 2026-03-04T00:48:29.851511Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 4 00:48:29.859363 waagent[1825]: 2026-03-04T00:48:29.859322Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Mar 4 00:48:30.043528 waagent[1905]: 2026-03-04T00:48:30.042922Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Mar 4 00:48:30.043528 waagent[1905]: 2026-03-04T00:48:30.043068Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Mar 4 00:48:30.043528 waagent[1905]: 2026-03-04T00:48:30.043121Z INFO ExtHandler ExtHandler Python: 3.11.9
Mar 4 00:48:30.403587 waagent[1905]: 2026-03-04T00:48:30.402779Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 4 00:48:30.403587 waagent[1905]: 2026-03-04T00:48:30.403012Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 4 00:48:30.403587 waagent[1905]: 2026-03-04T00:48:30.403072Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 4 00:48:30.411762 waagent[1905]: 2026-03-04T00:48:30.411699Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 4 00:48:30.417506 waagent[1905]: 2026-03-04T00:48:30.417466Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Mar 4 00:48:30.417996 waagent[1905]: 2026-03-04T00:48:30.417956Z INFO ExtHandler
Mar 4 00:48:30.418066 waagent[1905]: 2026-03-04T00:48:30.418039Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4d076a42-ee48-4053-9842-00d644b270a6 eTag: 7818039193865425841 source: Fabric]
Mar 4 00:48:30.418345 waagent[1905]: 2026-03-04T00:48:30.418310Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 4 00:48:30.432723 waagent[1905]: 2026-03-04T00:48:30.432073Z INFO ExtHandler
Mar 4 00:48:30.432723 waagent[1905]: 2026-03-04T00:48:30.432252Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 4 00:48:30.436864 waagent[1905]: 2026-03-04T00:48:30.436546Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 4 00:48:30.641261 waagent[1905]: 2026-03-04T00:48:30.641166Z INFO ExtHandler Downloaded certificate {'thumbprint': '3AFA986D067F431DA1D21438F74681E8A2015DF2', 'hasPrivateKey': True}
Mar 4 00:48:30.641826 waagent[1905]: 2026-03-04T00:48:30.641785Z INFO ExtHandler Fetch goal state completed
Mar 4 00:48:30.657161 waagent[1905]: 2026-03-04T00:48:30.657073Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1905
Mar 4 00:48:30.657264 waagent[1905]: 2026-03-04T00:48:30.657232Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Mar 4 00:48:30.658842 waagent[1905]: 2026-03-04T00:48:30.658804Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Mar 4 00:48:30.659193 waagent[1905]: 2026-03-04T00:48:30.659159Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 4 00:48:30.718536 waagent[1905]: 2026-03-04T00:48:30.718493Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 4 00:48:30.718748 waagent[1905]: 2026-03-04T00:48:30.718709Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 4 00:48:30.724509 waagent[1905]: 2026-03-04T00:48:30.724463Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 4 00:48:30.730907 systemd[1]: Reloading requested from client PID 1918 ('systemctl') (unit waagent.service)...
Mar 4 00:48:30.731158 systemd[1]: Reloading...
Mar 4 00:48:30.817598 zram_generator::config[1964]: No configuration found.
Mar 4 00:48:30.902140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:48:30.977899 systemd[1]: Reloading finished in 246 ms.
Mar 4 00:48:31.002647 waagent[1905]: 2026-03-04T00:48:31.001812Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Mar 4 00:48:31.008588 systemd[1]: Reloading requested from client PID 2006 ('systemctl') (unit waagent.service)...
Mar 4 00:48:31.008601 systemd[1]: Reloading...
Mar 4 00:48:31.084584 zram_generator::config[2040]: No configuration found.
Mar 4 00:48:31.191627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:48:31.267047 systemd[1]: Reloading finished in 258 ms.
Mar 4 00:48:31.290972 waagent[1905]: 2026-03-04T00:48:31.290125Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Mar 4 00:48:31.290972 waagent[1905]: 2026-03-04T00:48:31.290313Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Mar 4 00:48:31.680105 waagent[1905]: 2026-03-04T00:48:31.679986Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 4 00:48:31.680779 waagent[1905]: 2026-03-04T00:48:31.680735Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Mar 4 00:48:31.681598 waagent[1905]: 2026-03-04T00:48:31.681535Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 4 00:48:31.681888 waagent[1905]: 2026-03-04T00:48:31.681725Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 4 00:48:31.682080 waagent[1905]: 2026-03-04T00:48:31.682027Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 4 00:48:31.682191 waagent[1905]: 2026-03-04T00:48:31.682126Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 4 00:48:31.682275 waagent[1905]: 2026-03-04T00:48:31.682245Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 4 00:48:31.682331 waagent[1905]: 2026-03-04T00:48:31.682305Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 4 00:48:31.682479 waagent[1905]: 2026-03-04T00:48:31.682444Z INFO EnvHandler ExtHandler Configure routes
Mar 4 00:48:31.682799 waagent[1905]: 2026-03-04T00:48:31.682750Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 4 00:48:31.683112 waagent[1905]: 2026-03-04T00:48:31.683069Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 4 00:48:31.683305 waagent[1905]: 2026-03-04T00:48:31.683266Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 4 00:48:31.683305 waagent[1905]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 4 00:48:31.683305 waagent[1905]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Mar 4 00:48:31.683305 waagent[1905]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 4 00:48:31.683305 waagent[1905]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 4 00:48:31.683305 waagent[1905]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 4 00:48:31.683305 waagent[1905]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 4 00:48:31.683852 waagent[1905]: 2026-03-04T00:48:31.683770Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 4 00:48:31.683951 waagent[1905]: 2026-03-04T00:48:31.683908Z INFO EnvHandler ExtHandler Gateway:None
Mar 4 00:48:31.684008 waagent[1905]: 2026-03-04T00:48:31.683982Z INFO EnvHandler ExtHandler Routes:None
Mar 4 00:48:31.684598 waagent[1905]: 2026-03-04T00:48:31.684441Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 4 00:48:31.684598 waagent[1905]: 2026-03-04T00:48:31.684509Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 4 00:48:31.684915 waagent[1905]: 2026-03-04T00:48:31.684839Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 4 00:48:31.691015 waagent[1905]: 2026-03-04T00:48:31.690970Z INFO ExtHandler ExtHandler
Mar 4 00:48:31.691350 waagent[1905]: 2026-03-04T00:48:31.691308Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 02e325ec-e9bb-462b-981d-5e689bc7d269 correlation 5e11c59d-0881-48db-857e-e90316d9d742 created: 2026-03-04T00:47:17.830208Z]
Mar 4 00:48:31.691799 waagent[1905]: 2026-03-04T00:48:31.691758Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 4 00:48:31.693039 waagent[1905]: 2026-03-04T00:48:31.692362Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Mar 4 00:48:31.726947 waagent[1905]: 2026-03-04T00:48:31.725697Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A2E82FB9-9DB6-474F-AF48-994EA1765B06;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Mar 4 00:48:31.757688 waagent[1905]: 2026-03-04T00:48:31.757616Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 4 00:48:31.757688 waagent[1905]: Executing ['ip', '-a', '-o', 'link']:
Mar 4 00:48:31.757688 waagent[1905]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 4 00:48:31.757688 waagent[1905]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:e8:35 brd ff:ff:ff:ff:ff:ff
Mar 4 00:48:31.757688 waagent[1905]: 3: enP28871s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:e8:35 brd ff:ff:ff:ff:ff:ff\ altname enP28871p0s2
Mar 4 00:48:31.757688 waagent[1905]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 4 00:48:31.757688 waagent[1905]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 4 00:48:31.757688 waagent[1905]: 2: eth0 inet 10.200.20.22/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 4 00:48:31.757688 waagent[1905]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 4 00:48:31.757688 waagent[1905]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Mar 4 00:48:31.757688 waagent[1905]: 2: eth0 inet6 fe80::20d:3aff:fef5:e835/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Mar 4 00:48:31.823665 waagent[1905]: 2026-03-04T00:48:31.823600Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Mar 4 00:48:31.823665 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 00:48:31.823665 waagent[1905]: pkts bytes target prot opt in out source destination
Mar 4 00:48:31.823665 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 4 00:48:31.823665 waagent[1905]: pkts bytes target prot opt in out source destination
Mar 4 00:48:31.823665 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 00:48:31.823665 waagent[1905]: pkts bytes target prot opt in out source destination
Mar 4 00:48:31.823665 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 4 00:48:31.823665 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 4 00:48:31.823665 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 4 00:48:31.826546 waagent[1905]: 2026-03-04T00:48:31.826490Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 4 00:48:31.826546 waagent[1905]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 00:48:31.826546 waagent[1905]: pkts bytes target prot opt in out source destination
Mar 4 00:48:31.826546 waagent[1905]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 4 00:48:31.826546 waagent[1905]: pkts bytes target prot opt in out source destination
Mar 4 00:48:31.826546 waagent[1905]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 00:48:31.826546 waagent[1905]: pkts bytes target prot opt in out source destination
Mar 4 00:48:31.826546 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 4 00:48:31.826546 waagent[1905]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 4 00:48:31.826546 waagent[1905]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 4 00:48:31.826792 waagent[1905]: 2026-03-04T00:48:31.826761Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 4 00:48:33.912224 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 4 00:48:33.913278 systemd[1]: Started sshd@0-10.200.20.22:22-10.200.16.10:56926.service - OpenSSH per-connection server daemon (10.200.16.10:56926).
Mar 4 00:48:34.532936 sshd[2126]: Accepted publickey for core from 10.200.16.10 port 56926 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:48:34.534246 sshd[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:34.537653 systemd-logind[1694]: New session 3 of user core.
Mar 4 00:48:34.547716 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 4 00:48:34.968218 systemd[1]: Started sshd@1-10.200.20.22:22-10.200.16.10:56928.service - OpenSSH per-connection server daemon (10.200.16.10:56928).
Mar 4 00:48:35.459652 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 56928 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:48:35.460806 sshd[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:35.464534 systemd-logind[1694]: New session 4 of user core.
Mar 4 00:48:35.470677 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 4 00:48:35.813319 sshd[2131]: pam_unix(sshd:session): session closed for user core
Mar 4 00:48:35.816641 systemd-logind[1694]: Session 4 logged out. Waiting for processes to exit.
Mar 4 00:48:35.816878 systemd[1]: sshd@1-10.200.20.22:22-10.200.16.10:56928.service: Deactivated successfully.
Mar 4 00:48:35.818289 systemd[1]: session-4.scope: Deactivated successfully.
Mar 4 00:48:35.822091 systemd-logind[1694]: Removed session 4.
Mar 4 00:48:35.899984 systemd[1]: Started sshd@2-10.200.20.22:22-10.200.16.10:56936.service - OpenSSH per-connection server daemon (10.200.16.10:56936).
Mar 4 00:48:36.387591 sshd[2138]: Accepted publickey for core from 10.200.16.10 port 56936 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:48:36.388520 sshd[2138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:36.389313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 4 00:48:36.396755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:48:36.403393 systemd-logind[1694]: New session 5 of user core.
Mar 4 00:48:36.403810 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 4 00:48:36.505682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:48:36.505918 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 00:48:36.593583 kubelet[2149]: E0304 00:48:36.593528 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 00:48:36.597474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 00:48:36.597623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 00:48:36.729777 sshd[2138]: pam_unix(sshd:session): session closed for user core
Mar 4 00:48:36.732940 systemd[1]: sshd@2-10.200.20.22:22-10.200.16.10:56936.service: Deactivated successfully.
Mar 4 00:48:36.734755 systemd[1]: session-5.scope: Deactivated successfully.
Mar 4 00:48:36.736230 systemd-logind[1694]: Session 5 logged out. Waiting for processes to exit.
Mar 4 00:48:36.737272 systemd-logind[1694]: Removed session 5.
Mar 4 00:48:36.821885 systemd[1]: Started sshd@3-10.200.20.22:22-10.200.16.10:56944.service - OpenSSH per-connection server daemon (10.200.16.10:56944).
Mar 4 00:48:37.304590 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 56944 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:48:37.305534 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:37.310041 systemd-logind[1694]: New session 6 of user core.
Mar 4 00:48:37.315772 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 4 00:48:37.655447 sshd[2159]: pam_unix(sshd:session): session closed for user core
Mar 4 00:48:37.658890 systemd[1]: sshd@3-10.200.20.22:22-10.200.16.10:56944.service: Deactivated successfully.
Mar 4 00:48:37.660336 systemd[1]: session-6.scope: Deactivated successfully.
Mar 4 00:48:37.663022 systemd-logind[1694]: Session 6 logged out. Waiting for processes to exit.
Mar 4 00:48:37.663759 systemd-logind[1694]: Removed session 6.
Mar 4 00:48:37.745348 systemd[1]: Started sshd@4-10.200.20.22:22-10.200.16.10:56948.service - OpenSSH per-connection server daemon (10.200.16.10:56948).
Mar 4 00:48:38.232591 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 56948 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:48:38.233506 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:38.238090 systemd-logind[1694]: New session 7 of user core.
Mar 4 00:48:38.244697 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 4 00:48:40.764872 sudo[2169]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 00:48:40.765142 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:48:40.778654 sudo[2169]: pam_unix(sudo:session): session closed for user root
Mar 4 00:48:40.856775 sshd[2166]: pam_unix(sshd:session): session closed for user core
Mar 4 00:48:40.861342 systemd[1]: sshd@4-10.200.20.22:22-10.200.16.10:56948.service: Deactivated successfully.
Mar 4 00:48:40.863111 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 00:48:40.865106 systemd-logind[1694]: Session 7 logged out. Waiting for processes to exit.
Mar 4 00:48:40.866211 systemd-logind[1694]: Removed session 7.
Mar 4 00:48:40.943635 systemd[1]: Started sshd@5-10.200.20.22:22-10.200.16.10:38550.service - OpenSSH per-connection server daemon (10.200.16.10:38550).
Mar 4 00:48:41.431505 sshd[2174]: Accepted publickey for core from 10.200.16.10 port 38550 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:48:41.432349 sshd[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:48:41.436816 systemd-logind[1694]: New session 8 of user core.
Mar 4 00:48:41.443772 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 00:48:41.705175 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 00:48:41.705838 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:48:41.709058 sudo[2178]: pam_unix(sudo:session): session closed for user root
Mar 4 00:48:41.713618 sudo[2177]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 00:48:41.713888 sudo[2177]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:48:41.731176 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 00:48:41.732114 auditctl[2181]: No rules Mar 4 00:48:41.733224 systemd[1]: audit-rules.service: Deactivated successfully. Mar 4 00:48:41.733507 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 4 00:48:41.737624 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 00:48:41.758118 augenrules[2199]: No rules Mar 4 00:48:41.759590 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 00:48:41.762104 sudo[2177]: pam_unix(sudo:session): session closed for user root Mar 4 00:48:41.839775 sshd[2174]: pam_unix(sshd:session): session closed for user core Mar 4 00:48:41.844119 systemd[1]: sshd@5-10.200.20.22:22-10.200.16.10:38550.service: Deactivated successfully. Mar 4 00:48:41.845987 systemd[1]: session-8.scope: Deactivated successfully. Mar 4 00:48:41.847256 systemd-logind[1694]: Session 8 logged out. Waiting for processes to exit. Mar 4 00:48:41.848015 systemd-logind[1694]: Removed session 8. Mar 4 00:48:41.927299 systemd[1]: Started sshd@6-10.200.20.22:22-10.200.16.10:38566.service - OpenSSH per-connection server daemon (10.200.16.10:38566). Mar 4 00:48:42.416587 sshd[2207]: Accepted publickey for core from 10.200.16.10 port 38566 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:48:42.417345 sshd[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:48:42.421694 systemd-logind[1694]: New session 9 of user core. Mar 4 00:48:42.428791 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 4 00:48:42.690988 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 4 00:48:42.691249 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 00:48:45.075773 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 4 00:48:45.075919 (dockerd)[2225]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 4 00:48:46.699721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 4 00:48:46.710752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:48:46.907146 dockerd[2225]: time="2026-03-04T00:48:46.907100060Z" level=info msg="Starting up" Mar 4 00:48:48.490928 chronyd[1681]: Selected source PHC0 Mar 4 00:48:48.839386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:48:48.843058 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:48:48.875035 kubelet[2251]: E0304 00:48:48.874985 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:48:48.877808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:48:48.877936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:48:51.203030 dockerd[2225]: time="2026-03-04T00:48:51.202977277Z" level=info msg="Loading containers: start." Mar 4 00:48:51.389976 kernel: Initializing XFRM netlink socket Mar 4 00:48:51.556319 systemd-networkd[1359]: docker0: Link UP Mar 4 00:48:51.585668 dockerd[2225]: time="2026-03-04T00:48:51.585629917Z" level=info msg="Loading containers: done." Mar 4 00:48:51.595525 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck282137633-merged.mount: Deactivated successfully. 
Mar 4 00:48:51.609578 dockerd[2225]: time="2026-03-04T00:48:51.609385197Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 4 00:48:51.609578 dockerd[2225]: time="2026-03-04T00:48:51.609496717Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 4 00:48:51.609702 dockerd[2225]: time="2026-03-04T00:48:51.609636957Z" level=info msg="Daemon has completed initialization" Mar 4 00:48:51.679339 dockerd[2225]: time="2026-03-04T00:48:51.678077997Z" level=info msg="API listen on /run/docker.sock" Mar 4 00:48:51.678961 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 4 00:48:52.057397 containerd[1718]: time="2026-03-04T00:48:52.057166757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 4 00:48:52.970595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230874972.mount: Deactivated successfully. 
Mar 4 00:48:54.592587 containerd[1718]: time="2026-03-04T00:48:54.592486317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:54.598209 containerd[1718]: time="2026-03-04T00:48:54.598176997Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252" Mar 4 00:48:54.604309 containerd[1718]: time="2026-03-04T00:48:54.604278837Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:54.612775 containerd[1718]: time="2026-03-04T00:48:54.612716277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:54.614268 containerd[1718]: time="2026-03-04T00:48:54.613858997Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.55665172s" Mar 4 00:48:54.614268 containerd[1718]: time="2026-03-04T00:48:54.613895237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\"" Mar 4 00:48:54.614542 containerd[1718]: time="2026-03-04T00:48:54.614512797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 4 00:48:56.293986 containerd[1718]: time="2026-03-04T00:48:56.293931677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:56.296985 containerd[1718]: time="2026-03-04T00:48:56.296953437Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641" Mar 4 00:48:56.300521 containerd[1718]: time="2026-03-04T00:48:56.300480077Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:56.305636 containerd[1718]: time="2026-03-04T00:48:56.305555557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:56.306684 containerd[1718]: time="2026-03-04T00:48:56.306576157Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.69134904s" Mar 4 00:48:56.306684 containerd[1718]: time="2026-03-04T00:48:56.306606677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\"" Mar 4 00:48:56.307280 containerd[1718]: time="2026-03-04T00:48:56.307241917Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 4 00:48:57.944914 containerd[1718]: time="2026-03-04T00:48:57.944839433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:57.948366 containerd[1718]: time="2026-03-04T00:48:57.948335412Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544" Mar 4 00:48:57.955616 containerd[1718]: time="2026-03-04T00:48:57.955593771Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:57.962685 containerd[1718]: time="2026-03-04T00:48:57.962643610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:57.964778 containerd[1718]: time="2026-03-04T00:48:57.964750501Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.657476984s" Mar 4 00:48:57.964806 containerd[1718]: time="2026-03-04T00:48:57.964783782Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\"" Mar 4 00:48:57.965175 containerd[1718]: time="2026-03-04T00:48:57.965155024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 4 00:48:58.905195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 4 00:48:58.915787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:48:59.036760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 00:48:59.040919 (kubelet)[2452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:48:59.131396 kubelet[2452]: E0304 00:48:59.131357 2452 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:48:59.134258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:48:59.134403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:48:59.233934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318524256.mount: Deactivated successfully. Mar 4 00:48:59.827837 containerd[1718]: time="2026-03-04T00:48:59.827786888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:59.831316 containerd[1718]: time="2026-03-04T00:48:59.831084891Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088" Mar 4 00:48:59.834789 containerd[1718]: time="2026-03-04T00:48:59.834441933Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:59.838807 containerd[1718]: time="2026-03-04T00:48:59.838779136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:48:59.839339 containerd[1718]: time="2026-03-04T00:48:59.839308777Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id 
\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.874122873s" Mar 4 00:48:59.839387 containerd[1718]: time="2026-03-04T00:48:59.839340297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\"" Mar 4 00:48:59.839752 containerd[1718]: time="2026-03-04T00:48:59.839728297Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 4 00:49:00.621664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521847655.mount: Deactivated successfully. Mar 4 00:49:01.608532 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 4 00:49:02.145534 containerd[1718]: time="2026-03-04T00:49:02.145484201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:02.148670 containerd[1718]: time="2026-03-04T00:49:02.148641443Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Mar 4 00:49:02.152183 containerd[1718]: time="2026-03-04T00:49:02.152141566Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:02.158501 containerd[1718]: time="2026-03-04T00:49:02.157492089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:02.158791 containerd[1718]: time="2026-03-04T00:49:02.158764610Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.319004233s" Mar 4 00:49:02.158865 containerd[1718]: time="2026-03-04T00:49:02.158851810Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Mar 4 00:49:02.159794 containerd[1718]: time="2026-03-04T00:49:02.159765051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 4 00:49:03.206418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284335607.mount: Deactivated successfully. Mar 4 00:49:03.227819 containerd[1718]: time="2026-03-04T00:49:03.227759422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:03.235685 containerd[1718]: time="2026-03-04T00:49:03.235489867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 4 00:49:03.239140 containerd[1718]: time="2026-03-04T00:49:03.239101830Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:03.246092 containerd[1718]: time="2026-03-04T00:49:03.245765275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:03.246854 containerd[1718]: time="2026-03-04T00:49:03.246536715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo 
digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 1.086738624s" Mar 4 00:49:03.246854 containerd[1718]: time="2026-03-04T00:49:03.246586915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 4 00:49:03.247577 containerd[1718]: time="2026-03-04T00:49:03.247539476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 4 00:49:04.019241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023515625.mount: Deactivated successfully. Mar 4 00:49:05.384209 containerd[1718]: time="2026-03-04T00:49:05.383132626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:05.387099 containerd[1718]: time="2026-03-04T00:49:05.387067950Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515" Mar 4 00:49:05.392954 containerd[1718]: time="2026-03-04T00:49:05.392910596Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:05.399889 containerd[1718]: time="2026-03-04T00:49:05.399839642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:05.400941 containerd[1718]: time="2026-03-04T00:49:05.400905083Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.153312767s" Mar 4 
00:49:05.400941 containerd[1718]: time="2026-03-04T00:49:05.400939963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Mar 4 00:49:09.200182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 4 00:49:09.208868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:49:09.936552 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 4 00:49:09.936673 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 4 00:49:09.936931 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:49:09.953933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:49:09.985128 systemd[1]: Reloading requested from client PID 2613 ('systemctl') (unit session-9.scope)... Mar 4 00:49:09.985140 systemd[1]: Reloading... Mar 4 00:49:10.052579 zram_generator::config[2650]: No configuration found. Mar 4 00:49:10.166896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:49:10.247582 systemd[1]: Reloading finished in 262 ms. Mar 4 00:49:10.293083 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 4 00:49:10.293188 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 4 00:49:10.293547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:49:10.304961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:49:10.329130 update_engine[1699]: I20260304 00:49:10.328584 1699 update_attempter.cc:509] Updating boot flags... 
Mar 4 00:49:10.570888 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2723) Mar 4 00:49:10.690605 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2725) Mar 4 00:49:10.764925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:49:10.769849 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 00:49:10.811150 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 00:49:10.811150 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 00:49:10.811462 kubelet[2777]: I0304 00:49:10.811187 2777 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 00:49:11.093884 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2725) Mar 4 00:49:11.572779 kubelet[2777]: I0304 00:49:11.572739 2777 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 4 00:49:11.572779 kubelet[2777]: I0304 00:49:11.572770 2777 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 00:49:11.572932 kubelet[2777]: I0304 00:49:11.572796 2777 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 00:49:11.572932 kubelet[2777]: I0304 00:49:11.572802 2777 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 4 00:49:11.573032 kubelet[2777]: I0304 00:49:11.573013 2777 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 00:49:11.583048 kubelet[2777]: E0304 00:49:11.583003 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 00:49:11.583989 kubelet[2777]: I0304 00:49:11.583880 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 00:49:11.587875 kubelet[2777]: E0304 00:49:11.587847 2777 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 00:49:11.588018 kubelet[2777]: I0304 00:49:11.588005 2777 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 00:49:11.591384 kubelet[2777]: I0304 00:49:11.591295 2777 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 4 00:49:11.591653 kubelet[2777]: I0304 00:49:11.591630 2777 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 00:49:11.592122 kubelet[2777]: I0304 00:49:11.591716 2777 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-32bda88c6e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 00:49:11.592122 kubelet[2777]: I0304 00:49:11.591867 2777 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 
00:49:11.592122 kubelet[2777]: I0304 00:49:11.591875 2777 container_manager_linux.go:306] "Creating device plugin manager" Mar 4 00:49:11.592122 kubelet[2777]: I0304 00:49:11.591962 2777 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 00:49:11.600851 kubelet[2777]: I0304 00:49:11.600781 2777 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:49:11.602215 kubelet[2777]: I0304 00:49:11.602086 2777 kubelet.go:475] "Attempting to sync node with API server" Mar 4 00:49:11.602215 kubelet[2777]: I0304 00:49:11.602120 2777 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 00:49:11.602215 kubelet[2777]: I0304 00:49:11.602146 2777 kubelet.go:387] "Adding apiserver pod source" Mar 4 00:49:11.602215 kubelet[2777]: I0304 00:49:11.602160 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 00:49:11.602671 kubelet[2777]: E0304 00:49:11.602638 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-32bda88c6e&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 00:49:11.603584 kubelet[2777]: I0304 00:49:11.603130 2777 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 00:49:11.603804 kubelet[2777]: I0304 00:49:11.603790 2777 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 00:49:11.603878 kubelet[2777]: I0304 00:49:11.603862 2777 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 00:49:11.603964 kubelet[2777]: W0304 00:49:11.603955 
2777 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 4 00:49:11.606668 kubelet[2777]: I0304 00:49:11.606622 2777 server.go:1262] "Started kubelet" Mar 4 00:49:11.608960 kubelet[2777]: E0304 00:49:11.608937 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 00:49:11.609071 kubelet[2777]: I0304 00:49:11.609046 2777 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 00:49:11.609697 kubelet[2777]: I0304 00:49:11.609644 2777 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 00:49:11.609812 kubelet[2777]: I0304 00:49:11.609799 2777 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 00:49:11.609884 kubelet[2777]: I0304 00:49:11.609864 2777 server.go:310] "Adding debug handlers to kubelet server" Mar 4 00:49:11.613076 kubelet[2777]: I0304 00:49:11.613054 2777 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 00:49:11.613282 kubelet[2777]: E0304 00:49:11.612232 2777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-32bda88c6e.18997cfc58927799 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-32bda88c6e,UID:ci-4081.3.6-n-32bda88c6e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-32bda88c6e,},FirstTimestamp:2026-03-04 00:49:11.606597529 +0000 UTC m=+0.834268203,LastTimestamp:2026-03-04 00:49:11.606597529 +0000 UTC m=+0.834268203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-32bda88c6e,}" Mar 4 00:49:11.615623 kubelet[2777]: I0304 00:49:11.615588 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 00:49:11.616643 kubelet[2777]: I0304 00:49:11.616163 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 00:49:11.620072 kubelet[2777]: E0304 00:49:11.620052 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" Mar 4 00:49:11.620184 kubelet[2777]: I0304 00:49:11.620173 2777 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 4 00:49:11.621121 kubelet[2777]: I0304 00:49:11.621106 2777 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 00:49:11.621252 kubelet[2777]: I0304 00:49:11.621243 2777 reconciler.go:29] "Reconciler: start to sync state" Mar 4 00:49:11.621481 kubelet[2777]: E0304 00:49:11.621466 2777 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 00:49:11.622758 kubelet[2777]: I0304 00:49:11.622740 2777 factory.go:223] Registration of the systemd container factory successfully Mar 4 00:49:11.622909 kubelet[2777]: I0304 00:49:11.622894 2777 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 00:49:11.623072 kubelet[2777]: I0304 00:49:11.622956 2777 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 4 00:49:11.623296 kubelet[2777]: E0304 00:49:11.623275 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 00:49:11.624080 kubelet[2777]: E0304 00:49:11.623829 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32bda88c6e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="200ms" Mar 4 00:49:11.627301 kubelet[2777]: I0304 00:49:11.627282 2777 factory.go:223] Registration of the containerd container factory successfully Mar 4 00:49:11.658509 kubelet[2777]: I0304 00:49:11.658467 2777 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 4 00:49:11.658509 kubelet[2777]: I0304 00:49:11.658497 2777 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 4 00:49:11.658656 kubelet[2777]: I0304 00:49:11.658521 2777 kubelet.go:2428] "Starting kubelet main sync loop" Mar 4 00:49:11.658656 kubelet[2777]: E0304 00:49:11.658575 2777 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 00:49:11.659322 kubelet[2777]: E0304 00:49:11.659056 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 00:49:11.685095 kubelet[2777]: I0304 00:49:11.685061 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 00:49:11.685095 kubelet[2777]: I0304 00:49:11.685082 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 00:49:11.685238 kubelet[2777]: I0304 00:49:11.685115 2777 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:49:11.706220 kubelet[2777]: I0304 00:49:11.706177 2777 policy_none.go:49] "None policy: Start" Mar 4 00:49:11.706220 kubelet[2777]: I0304 00:49:11.706230 2777 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 00:49:11.706388 kubelet[2777]: I0304 00:49:11.706249 2777 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 00:49:11.713143 kubelet[2777]: I0304 00:49:11.713113 2777 policy_none.go:47] "Start" Mar 4 00:49:11.717144 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 4 00:49:11.721075 kubelet[2777]: E0304 00:49:11.721050 2777 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" Mar 4 00:49:11.727072 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 4 00:49:11.730543 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 4 00:49:11.735292 kubelet[2777]: E0304 00:49:11.735265 2777 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 00:49:11.735473 kubelet[2777]: I0304 00:49:11.735459 2777 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 00:49:11.735508 kubelet[2777]: I0304 00:49:11.735474 2777 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 00:49:11.736054 kubelet[2777]: I0304 00:49:11.736015 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 00:49:11.737597 kubelet[2777]: E0304 00:49:11.737542 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 00:49:11.737597 kubelet[2777]: E0304 00:49:11.737585 2777 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-32bda88c6e\" not found" Mar 4 00:49:11.772368 systemd[1]: Created slice kubepods-burstable-pod748e03497bfa72039dcd058a7f990b75.slice - libcontainer container kubepods-burstable-pod748e03497bfa72039dcd058a7f990b75.slice. 
Mar 4 00:49:11.779382 kubelet[2777]: E0304 00:49:11.779344 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.783008 systemd[1]: Created slice kubepods-burstable-poddf335d5dfce2d8c582b87294022d8d12.slice - libcontainer container kubepods-burstable-poddf335d5dfce2d8c582b87294022d8d12.slice. Mar 4 00:49:11.789183 kubelet[2777]: E0304 00:49:11.788861 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.806406 systemd[1]: Created slice kubepods-burstable-pod01d372e9ac0290ca68beb38f765b8e28.slice - libcontainer container kubepods-burstable-pod01d372e9ac0290ca68beb38f765b8e28.slice. Mar 4 00:49:11.807991 kubelet[2777]: E0304 00:49:11.807964 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822519 kubelet[2777]: I0304 00:49:11.822482 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df335d5dfce2d8c582b87294022d8d12-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-32bda88c6e\" (UID: \"df335d5dfce2d8c582b87294022d8d12\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822856 kubelet[2777]: I0304 00:49:11.822520 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01d372e9ac0290ca68beb38f765b8e28-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" (UID: \"01d372e9ac0290ca68beb38f765b8e28\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822856 kubelet[2777]: I0304 
00:49:11.822536 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01d372e9ac0290ca68beb38f765b8e28-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" (UID: \"01d372e9ac0290ca68beb38f765b8e28\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822856 kubelet[2777]: I0304 00:49:11.822549 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822856 kubelet[2777]: I0304 00:49:11.822577 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822856 kubelet[2777]: I0304 00:49:11.822594 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822977 kubelet[2777]: I0304 00:49:11.822609 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01d372e9ac0290ca68beb38f765b8e28-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" (UID: \"01d372e9ac0290ca68beb38f765b8e28\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822977 kubelet[2777]: I0304 00:49:11.822621 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.822977 kubelet[2777]: I0304 00:49:11.822635 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.825129 kubelet[2777]: E0304 00:49:11.825095 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32bda88c6e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="400ms" Mar 4 00:49:11.838883 kubelet[2777]: I0304 00:49:11.838853 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:11.839172 kubelet[2777]: E0304 00:49:11.839148 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:12.041037 kubelet[2777]: I0304 00:49:12.040677 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:12.041037 kubelet[2777]: E0304 00:49:12.040965 2777 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:12.089125 containerd[1718]: time="2026-03-04T00:49:12.089036553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-32bda88c6e,Uid:748e03497bfa72039dcd058a7f990b75,Namespace:kube-system,Attempt:0,}" Mar 4 00:49:12.119780 containerd[1718]: time="2026-03-04T00:49:12.119517102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-32bda88c6e,Uid:df335d5dfce2d8c582b87294022d8d12,Namespace:kube-system,Attempt:0,}" Mar 4 00:49:12.132096 containerd[1718]: time="2026-03-04T00:49:12.132056034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-32bda88c6e,Uid:01d372e9ac0290ca68beb38f765b8e28,Namespace:kube-system,Attempt:0,}" Mar 4 00:49:12.226420 kubelet[2777]: E0304 00:49:12.226369 2777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32bda88c6e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="800ms" Mar 4 00:49:12.442874 kubelet[2777]: I0304 00:49:12.442779 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:12.443210 kubelet[2777]: E0304 00:49:12.443067 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:12.708630 kubelet[2777]: E0304 00:49:12.708598 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 00:49:12.852381 kubelet[2777]: E0304 00:49:12.852344 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 00:49:12.854770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805110951.mount: Deactivated successfully. Mar 4 00:49:12.876591 containerd[1718]: time="2026-03-04T00:49:12.876522056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:49:12.890950 containerd[1718]: time="2026-03-04T00:49:12.890916951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 4 00:49:12.894015 containerd[1718]: time="2026-03-04T00:49:12.893986754Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:49:12.897332 containerd[1718]: time="2026-03-04T00:49:12.897303438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 00:49:12.900748 containerd[1718]: time="2026-03-04T00:49:12.900704841Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:49:12.904765 containerd[1718]: time="2026-03-04T00:49:12.904715526Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:49:12.911965 containerd[1718]: time="2026-03-04T00:49:12.911931573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 00:49:12.916227 containerd[1718]: time="2026-03-04T00:49:12.916195937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:49:12.918584 containerd[1718]: time="2026-03-04T00:49:12.916842498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 797.245955ms" Mar 4 00:49:12.919072 containerd[1718]: time="2026-03-04T00:49:12.919039020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 786.912185ms" Mar 4 00:49:12.925595 containerd[1718]: time="2026-03-04T00:49:12.925536027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 835.686393ms" Mar 4 00:49:13.027536 kubelet[2777]: E0304 00:49:13.027400 2777 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32bda88c6e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="1.6s" Mar 4 00:49:13.057347 kubelet[2777]: E0304 00:49:13.057307 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 00:49:13.194861 kubelet[2777]: E0304 00:49:13.194801 2777 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-32bda88c6e&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 00:49:13.244713 kubelet[2777]: I0304 00:49:13.244682 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:13.244997 kubelet[2777]: E0304 00:49:13.244970 2777 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:13.609646 kubelet[2777]: E0304 00:49:13.609608 2777 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 00:49:13.708622 containerd[1718]: time="2026-03-04T00:49:13.708023798Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:13.708622 containerd[1718]: time="2026-03-04T00:49:13.708167398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:13.708622 containerd[1718]: time="2026-03-04T00:49:13.708219798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:13.709129 containerd[1718]: time="2026-03-04T00:49:13.709046559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:13.710422 containerd[1718]: time="2026-03-04T00:49:13.710200320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:13.710422 containerd[1718]: time="2026-03-04T00:49:13.710243000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:13.710422 containerd[1718]: time="2026-03-04T00:49:13.710259080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:13.710422 containerd[1718]: time="2026-03-04T00:49:13.710337800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:13.711885 containerd[1718]: time="2026-03-04T00:49:13.711660961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:13.711885 containerd[1718]: time="2026-03-04T00:49:13.711699281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:13.711885 containerd[1718]: time="2026-03-04T00:49:13.711713921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:13.711885 containerd[1718]: time="2026-03-04T00:49:13.711778282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:13.735792 systemd[1]: Started cri-containerd-a8b718151de179aea0b01be900df4ce7274ca09389831153cd5728f2fb6b5903.scope - libcontainer container a8b718151de179aea0b01be900df4ce7274ca09389831153cd5728f2fb6b5903. Mar 4 00:49:13.736875 systemd[1]: Started cri-containerd-ec4bcdf72fed8ff25c7da0a812cfe8adbb5f5a5e41d9ee1cae4fb75d772d2c3f.scope - libcontainer container ec4bcdf72fed8ff25c7da0a812cfe8adbb5f5a5e41d9ee1cae4fb75d772d2c3f. Mar 4 00:49:13.742257 systemd[1]: Started cri-containerd-c24f68dae3b99bd734bf66aa919ec54cf2dcc93a9ae0a87b3605c7fe89a63829.scope - libcontainer container c24f68dae3b99bd734bf66aa919ec54cf2dcc93a9ae0a87b3605c7fe89a63829. 
Mar 4 00:49:13.792854 containerd[1718]: time="2026-03-04T00:49:13.791177364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-32bda88c6e,Uid:748e03497bfa72039dcd058a7f990b75,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec4bcdf72fed8ff25c7da0a812cfe8adbb5f5a5e41d9ee1cae4fb75d772d2c3f\"" Mar 4 00:49:13.792975 containerd[1718]: time="2026-03-04T00:49:13.792723805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-32bda88c6e,Uid:01d372e9ac0290ca68beb38f765b8e28,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8b718151de179aea0b01be900df4ce7274ca09389831153cd5728f2fb6b5903\"" Mar 4 00:49:13.795861 containerd[1718]: time="2026-03-04T00:49:13.795830089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-32bda88c6e,Uid:df335d5dfce2d8c582b87294022d8d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"c24f68dae3b99bd734bf66aa919ec54cf2dcc93a9ae0a87b3605c7fe89a63829\"" Mar 4 00:49:13.803930 containerd[1718]: time="2026-03-04T00:49:13.803890777Z" level=info msg="CreateContainer within sandbox \"a8b718151de179aea0b01be900df4ce7274ca09389831153cd5728f2fb6b5903\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 00:49:13.809519 containerd[1718]: time="2026-03-04T00:49:13.809400743Z" level=info msg="CreateContainer within sandbox \"ec4bcdf72fed8ff25c7da0a812cfe8adbb5f5a5e41d9ee1cae4fb75d772d2c3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 00:49:13.814991 containerd[1718]: time="2026-03-04T00:49:13.814960388Z" level=info msg="CreateContainer within sandbox \"c24f68dae3b99bd734bf66aa919ec54cf2dcc93a9ae0a87b3605c7fe89a63829\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 00:49:13.888822 containerd[1718]: time="2026-03-04T00:49:13.888633825Z" level=info msg="CreateContainer within sandbox 
\"a8b718151de179aea0b01be900df4ce7274ca09389831153cd5728f2fb6b5903\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6e7fc6962359584958829eeb8449003c830b10f82cd3d07f0eab83697d20d7f\"" Mar 4 00:49:13.889936 containerd[1718]: time="2026-03-04T00:49:13.889906226Z" level=info msg="StartContainer for \"c6e7fc6962359584958829eeb8449003c830b10f82cd3d07f0eab83697d20d7f\"" Mar 4 00:49:13.894285 containerd[1718]: time="2026-03-04T00:49:13.894190270Z" level=info msg="CreateContainer within sandbox \"ec4bcdf72fed8ff25c7da0a812cfe8adbb5f5a5e41d9ee1cae4fb75d772d2c3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c617339f5a834fde5112fcffbb70bfd522c563a375e0c8485471195563e2a9f2\"" Mar 4 00:49:13.895598 containerd[1718]: time="2026-03-04T00:49:13.894726671Z" level=info msg="StartContainer for \"c617339f5a834fde5112fcffbb70bfd522c563a375e0c8485471195563e2a9f2\"" Mar 4 00:49:13.899992 containerd[1718]: time="2026-03-04T00:49:13.899965156Z" level=info msg="CreateContainer within sandbox \"c24f68dae3b99bd734bf66aa919ec54cf2dcc93a9ae0a87b3605c7fe89a63829\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc25e7518a2aceb6ae940d7b452da32e2b0a70ae8088969cf5eb3c18e6d77b49\"" Mar 4 00:49:13.900803 containerd[1718]: time="2026-03-04T00:49:13.900772677Z" level=info msg="StartContainer for \"dc25e7518a2aceb6ae940d7b452da32e2b0a70ae8088969cf5eb3c18e6d77b49\"" Mar 4 00:49:13.931707 systemd[1]: Started cri-containerd-c6e7fc6962359584958829eeb8449003c830b10f82cd3d07f0eab83697d20d7f.scope - libcontainer container c6e7fc6962359584958829eeb8449003c830b10f82cd3d07f0eab83697d20d7f. Mar 4 00:49:13.935820 systemd[1]: Started cri-containerd-c617339f5a834fde5112fcffbb70bfd522c563a375e0c8485471195563e2a9f2.scope - libcontainer container c617339f5a834fde5112fcffbb70bfd522c563a375e0c8485471195563e2a9f2. 
Mar 4 00:49:13.945715 systemd[1]: Started cri-containerd-dc25e7518a2aceb6ae940d7b452da32e2b0a70ae8088969cf5eb3c18e6d77b49.scope - libcontainer container dc25e7518a2aceb6ae940d7b452da32e2b0a70ae8088969cf5eb3c18e6d77b49. Mar 4 00:49:13.997684 containerd[1718]: time="2026-03-04T00:49:13.997231977Z" level=info msg="StartContainer for \"c617339f5a834fde5112fcffbb70bfd522c563a375e0c8485471195563e2a9f2\" returns successfully" Mar 4 00:49:13.997894 containerd[1718]: time="2026-03-04T00:49:13.997380337Z" level=info msg="StartContainer for \"c6e7fc6962359584958829eeb8449003c830b10f82cd3d07f0eab83697d20d7f\" returns successfully" Mar 4 00:49:14.006878 containerd[1718]: time="2026-03-04T00:49:14.006657227Z" level=info msg="StartContainer for \"dc25e7518a2aceb6ae940d7b452da32e2b0a70ae8088969cf5eb3c18e6d77b49\" returns successfully" Mar 4 00:49:14.670155 kubelet[2777]: E0304 00:49:14.669931 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:14.675633 kubelet[2777]: E0304 00:49:14.675282 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:14.676608 kubelet[2777]: E0304 00:49:14.676486 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:14.847186 kubelet[2777]: I0304 00:49:14.847039 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:15.678651 kubelet[2777]: E0304 00:49:15.676395 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:15.679766 kubelet[2777]: 
E0304 00:49:15.679627 2777 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.122055 kubelet[2777]: E0304 00:49:16.122022 2777 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-32bda88c6e\" not found" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.421969 kubelet[2777]: I0304 00:49:16.421480 2777 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.421969 kubelet[2777]: E0304 00:49:16.421518 2777 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-32bda88c6e\": node \"ci-4081.3.6-n-32bda88c6e\" not found" Mar 4 00:49:16.423934 kubelet[2777]: I0304 00:49:16.423828 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.479993 kubelet[2777]: E0304 00:49:16.479784 2777 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.479993 kubelet[2777]: I0304 00:49:16.479815 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.482238 kubelet[2777]: E0304 00:49:16.482048 2777 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-32bda88c6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.482238 kubelet[2777]: I0304 00:49:16.482089 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 
00:49:16.483550 kubelet[2777]: E0304 00:49:16.483523 2777 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.611124 kubelet[2777]: I0304 00:49:16.611082 2777 apiserver.go:52] "Watching apiserver" Mar 4 00:49:16.621488 kubelet[2777]: I0304 00:49:16.621460 2777 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 00:49:16.676854 kubelet[2777]: I0304 00:49:16.676355 2777 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:16.687832 kubelet[2777]: E0304 00:49:16.687794 2777 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:18.269033 systemd[1]: Reloading requested from client PID 3096 ('systemctl') (unit session-9.scope)... Mar 4 00:49:18.269047 systemd[1]: Reloading... Mar 4 00:49:18.369591 zram_generator::config[3139]: No configuration found. Mar 4 00:49:18.472440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:49:18.562822 systemd[1]: Reloading finished in 293 ms. Mar 4 00:49:18.597918 kubelet[2777]: I0304 00:49:18.597844 2777 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 00:49:18.598233 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:49:18.615753 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 4 00:49:18.616240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:49:18.616311 systemd[1]: kubelet.service: Consumed 1.116s CPU time, 122.1M memory peak, 0B memory swap peak. Mar 4 00:49:18.626377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:49:18.732976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:49:18.741913 (kubelet)[3200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 00:49:18.870260 kubelet[3200]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 00:49:18.870260 kubelet[3200]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 00:49:18.870260 kubelet[3200]: I0304 00:49:18.869993 3200 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 00:49:18.878026 kubelet[3200]: I0304 00:49:18.877995 3200 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 4 00:49:18.878026 kubelet[3200]: I0304 00:49:18.878020 3200 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 00:49:18.878197 kubelet[3200]: I0304 00:49:18.878050 3200 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 00:49:18.878197 kubelet[3200]: I0304 00:49:18.878056 3200 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 4 00:49:18.878302 kubelet[3200]: I0304 00:49:18.878286 3200 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 00:49:18.882376 kubelet[3200]: I0304 00:49:18.880119 3200 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 00:49:18.883852 kubelet[3200]: I0304 00:49:18.883829 3200 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 00:49:18.887075 kubelet[3200]: E0304 00:49:18.887046 3200 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 00:49:18.887149 kubelet[3200]: I0304 00:49:18.887096 3200 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 00:49:18.889790 kubelet[3200]: I0304 00:49:18.889774 3200 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 4 00:49:18.889965 kubelet[3200]: I0304 00:49:18.889942 3200 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 00:49:18.890096 kubelet[3200]: I0304 00:49:18.889966 3200 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-32bda88c6e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 00:49:18.890096 kubelet[3200]: I0304 00:49:18.890096 3200 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 
00:49:18.890197 kubelet[3200]: I0304 00:49:18.890104 3200 container_manager_linux.go:306] "Creating device plugin manager" Mar 4 00:49:18.890197 kubelet[3200]: I0304 00:49:18.890126 3200 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 00:49:18.890298 kubelet[3200]: I0304 00:49:18.890287 3200 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:49:18.890408 kubelet[3200]: I0304 00:49:18.890395 3200 kubelet.go:475] "Attempting to sync node with API server" Mar 4 00:49:18.890442 kubelet[3200]: I0304 00:49:18.890409 3200 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 00:49:18.890442 kubelet[3200]: I0304 00:49:18.890435 3200 kubelet.go:387] "Adding apiserver pod source" Mar 4 00:49:18.891404 kubelet[3200]: I0304 00:49:18.890447 3200 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 00:49:18.892629 kubelet[3200]: I0304 00:49:18.892599 3200 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 00:49:18.894417 kubelet[3200]: I0304 00:49:18.894369 3200 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 00:49:18.894417 kubelet[3200]: I0304 00:49:18.894415 3200 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 00:49:18.898881 kubelet[3200]: I0304 00:49:18.898858 3200 server.go:1262] "Started kubelet" Mar 4 00:49:18.899735 kubelet[3200]: I0304 00:49:18.899672 3200 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 00:49:18.899801 kubelet[3200]: I0304 00:49:18.899739 3200 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 00:49:18.899959 kubelet[3200]: I0304 00:49:18.899938 3200 server.go:249] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 00:49:18.901269 kubelet[3200]: I0304 00:49:18.899993 3200 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 00:49:18.901543 kubelet[3200]: I0304 00:49:18.901528 3200 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 00:49:18.901739 kubelet[3200]: I0304 00:49:18.901726 3200 server.go:310] "Adding debug handlers to kubelet server" Mar 4 00:49:18.907998 kubelet[3200]: I0304 00:49:18.907973 3200 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 00:49:18.910259 kubelet[3200]: I0304 00:49:18.910244 3200 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 4 00:49:18.910353 kubelet[3200]: E0304 00:49:18.910339 3200 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32bda88c6e\" not found" Mar 4 00:49:18.910902 kubelet[3200]: I0304 00:49:18.910885 3200 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 00:49:18.911000 kubelet[3200]: I0304 00:49:18.910989 3200 reconciler.go:29] "Reconciler: start to sync state" Mar 4 00:49:18.938696 kubelet[3200]: I0304 00:49:18.938660 3200 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 00:49:18.941576 kubelet[3200]: I0304 00:49:18.940667 3200 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 4 00:49:18.941576 kubelet[3200]: I0304 00:49:18.940693 3200 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 4 00:49:18.941576 kubelet[3200]: I0304 00:49:18.940717 3200 kubelet.go:2428] "Starting kubelet main sync loop" Mar 4 00:49:18.941576 kubelet[3200]: E0304 00:49:18.940764 3200 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 00:49:18.954541 kubelet[3200]: I0304 00:49:18.953479 3200 factory.go:223] Registration of the systemd container factory successfully Mar 4 00:49:18.954541 kubelet[3200]: I0304 00:49:18.953612 3200 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 00:49:18.959809 kubelet[3200]: E0304 00:49:18.957729 3200 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 00:49:18.963040 kubelet[3200]: I0304 00:49:18.963018 3200 factory.go:223] Registration of the containerd container factory successfully Mar 4 00:49:19.007422 kubelet[3200]: I0304 00:49:19.007397 3200 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 00:49:19.007697 kubelet[3200]: I0304 00:49:19.007681 3200 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 00:49:19.132726 kubelet[3200]: I0304 00:49:19.008290 3200 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:49:19.132726 kubelet[3200]: E0304 00:49:19.040876 3200 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 4 00:49:19.134064 kubelet[3200]: I0304 00:49:19.134033 3200 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 00:49:19.134106 kubelet[3200]: I0304 00:49:19.134064 3200 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 00:49:19.134106 kubelet[3200]: I0304 00:49:19.134085 3200 policy_none.go:49] "None policy: Start" Mar 4 00:49:19.134106 kubelet[3200]: I0304 00:49:19.134097 3200 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 00:49:19.134168 kubelet[3200]: I0304 00:49:19.134109 3200 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 00:49:19.134210 kubelet[3200]: I0304 00:49:19.134199 3200 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 4 00:49:19.134210 kubelet[3200]: I0304 00:49:19.134209 3200 policy_none.go:47] "Start" Mar 4 00:49:19.139299 kubelet[3200]: E0304 00:49:19.138825 3200 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 00:49:19.139299 kubelet[3200]: I0304 00:49:19.138988 3200 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 00:49:19.139299 
kubelet[3200]: I0304 00:49:19.139000 3200 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 00:49:19.139299 kubelet[3200]: I0304 00:49:19.139217 3200 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 00:49:19.142429 kubelet[3200]: E0304 00:49:19.142409 3200 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 00:49:19.241743 kubelet[3200]: I0304 00:49:19.241709 3200 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.241892 kubelet[3200]: I0304 00:49:19.241879 3200 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.242886 kubelet[3200]: I0304 00:49:19.242149 3200 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.250849 kubelet[3200]: I0304 00:49:19.250387 3200 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.251442 kubelet[3200]: I0304 00:49:19.251413 3200 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:49:19.255140 kubelet[3200]: I0304 00:49:19.254960 3200 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:49:19.255140 kubelet[3200]: I0304 00:49:19.254985 3200 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:49:19.263490 kubelet[3200]: I0304 00:49:19.263375 3200 kubelet_node_status.go:124] "Node was 
previously registered" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.263490 kubelet[3200]: I0304 00:49:19.263450 3200 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312699 kubelet[3200]: I0304 00:49:19.312384 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df335d5dfce2d8c582b87294022d8d12-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-32bda88c6e\" (UID: \"df335d5dfce2d8c582b87294022d8d12\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312699 kubelet[3200]: I0304 00:49:19.312420 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01d372e9ac0290ca68beb38f765b8e28-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" (UID: \"01d372e9ac0290ca68beb38f765b8e28\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312699 kubelet[3200]: I0304 00:49:19.312439 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01d372e9ac0290ca68beb38f765b8e28-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" (UID: \"01d372e9ac0290ca68beb38f765b8e28\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312699 kubelet[3200]: I0304 00:49:19.312454 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312699 kubelet[3200]: I0304 00:49:19.312477 3200 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312917 kubelet[3200]: I0304 00:49:19.312492 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312917 kubelet[3200]: I0304 00:49:19.312505 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01d372e9ac0290ca68beb38f765b8e28-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32bda88c6e\" (UID: \"01d372e9ac0290ca68beb38f765b8e28\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312917 kubelet[3200]: I0304 00:49:19.312525 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: \"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.312917 kubelet[3200]: I0304 00:49:19.312542 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/748e03497bfa72039dcd058a7f990b75-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-32bda88c6e\" (UID: 
\"748e03497bfa72039dcd058a7f990b75\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:19.893397 kubelet[3200]: I0304 00:49:19.891741 3200 apiserver.go:52] "Watching apiserver" Mar 4 00:49:19.911920 kubelet[3200]: I0304 00:49:19.911885 3200 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 00:49:19.992080 kubelet[3200]: I0304 00:49:19.991828 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32bda88c6e" podStartSLOduration=0.991812226 podStartE2EDuration="991.812226ms" podCreationTimestamp="2026-03-04 00:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:49:19.972095086 +0000 UTC m=+1.225160390" watchObservedRunningTime="2026-03-04 00:49:19.991812226 +0000 UTC m=+1.244877530" Mar 4 00:49:20.009471 kubelet[3200]: I0304 00:49:20.009254 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32bda88c6e" podStartSLOduration=1.009239924 podStartE2EDuration="1.009239924s" podCreationTimestamp="2026-03-04 00:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:49:19.991940546 +0000 UTC m=+1.245005810" watchObservedRunningTime="2026-03-04 00:49:20.009239924 +0000 UTC m=+1.262305228" Mar 4 00:49:20.009471 kubelet[3200]: I0304 00:49:20.009394 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-32bda88c6e" podStartSLOduration=1.009390244 podStartE2EDuration="1.009390244s" podCreationTimestamp="2026-03-04 00:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:49:20.007379562 +0000 UTC 
m=+1.260444866" watchObservedRunningTime="2026-03-04 00:49:20.009390244 +0000 UTC m=+1.262455548" Mar 4 00:49:23.695524 kubelet[3200]: I0304 00:49:23.695492 3200 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 00:49:23.696673 kubelet[3200]: I0304 00:49:23.695966 3200 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 00:49:23.696751 containerd[1718]: time="2026-03-04T00:49:23.695806698Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 00:49:23.837505 kubelet[3200]: I0304 00:49:23.837414 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa-lib-modules\") pod \"kube-proxy-ccqzt\" (UID: \"68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa\") " pod="kube-system/kube-proxy-ccqzt" Mar 4 00:49:23.837505 kubelet[3200]: I0304 00:49:23.837450 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vb9f\" (UniqueName: \"kubernetes.io/projected/68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa-kube-api-access-2vb9f\") pod \"kube-proxy-ccqzt\" (UID: \"68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa\") " pod="kube-system/kube-proxy-ccqzt" Mar 4 00:49:23.837505 kubelet[3200]: I0304 00:49:23.837477 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa-kube-proxy\") pod \"kube-proxy-ccqzt\" (UID: \"68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa\") " pod="kube-system/kube-proxy-ccqzt" Mar 4 00:49:23.837694 kubelet[3200]: I0304 00:49:23.837506 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa-xtables-lock\") pod \"kube-proxy-ccqzt\" (UID: \"68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa\") " pod="kube-system/kube-proxy-ccqzt" Mar 4 00:49:23.840007 systemd[1]: Created slice kubepods-besteffort-pod68b2cb2c_ead9_4b7e_9d4c_d2ec3614dcaa.slice - libcontainer container kubepods-besteffort-pod68b2cb2c_ead9_4b7e_9d4c_d2ec3614dcaa.slice. Mar 4 00:49:23.946029 kubelet[3200]: E0304 00:49:23.945907 3200 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 4 00:49:23.946029 kubelet[3200]: E0304 00:49:23.945937 3200 projected.go:196] Error preparing data for projected volume kube-api-access-2vb9f for pod kube-system/kube-proxy-ccqzt: configmap "kube-root-ca.crt" not found Mar 4 00:49:23.946029 kubelet[3200]: E0304 00:49:23.946016 3200 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa-kube-api-access-2vb9f podName:68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa nodeName:}" failed. No retries permitted until 2026-03-04 00:49:24.445994053 +0000 UTC m=+5.699059357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2vb9f" (UniqueName: "kubernetes.io/projected/68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa-kube-api-access-2vb9f") pod "kube-proxy-ccqzt" (UID: "68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa") : configmap "kube-root-ca.crt" not found Mar 4 00:49:24.756126 containerd[1718]: time="2026-03-04T00:49:24.755779381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccqzt,Uid:68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa,Namespace:kube-system,Attempt:0,}" Mar 4 00:49:24.811485 containerd[1718]: time="2026-03-04T00:49:24.811388002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:24.811485 containerd[1718]: time="2026-03-04T00:49:24.811449002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:24.811749 containerd[1718]: time="2026-03-04T00:49:24.811460362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:24.811932 containerd[1718]: time="2026-03-04T00:49:24.811829403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:24.829761 systemd[1]: Started cri-containerd-9d836095a4b3a400071532aeb7ba6e0bead867bf7be94184ff6243384c1ea4a7.scope - libcontainer container 9d836095a4b3a400071532aeb7ba6e0bead867bf7be94184ff6243384c1ea4a7. Mar 4 00:49:24.855859 containerd[1718]: time="2026-03-04T00:49:24.853875849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccqzt,Uid:68b2cb2c-ead9-4b7e-9d4c-d2ec3614dcaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d836095a4b3a400071532aeb7ba6e0bead867bf7be94184ff6243384c1ea4a7\"" Mar 4 00:49:24.869446 containerd[1718]: time="2026-03-04T00:49:24.869245306Z" level=info msg="CreateContainer within sandbox \"9d836095a4b3a400071532aeb7ba6e0bead867bf7be94184ff6243384c1ea4a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 00:49:24.934547 containerd[1718]: time="2026-03-04T00:49:24.934417737Z" level=info msg="CreateContainer within sandbox \"9d836095a4b3a400071532aeb7ba6e0bead867bf7be94184ff6243384c1ea4a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7ffcfc6737977c64a23a2d873565f3e3cca259427f7dda9d28a64875cacdeb8\"" Mar 4 00:49:24.934547 containerd[1718]: time="2026-03-04T00:49:24.934880058Z" level=info msg="StartContainer for \"c7ffcfc6737977c64a23a2d873565f3e3cca259427f7dda9d28a64875cacdeb8\"" Mar 4 
00:49:24.943883 kubelet[3200]: I0304 00:49:24.942975 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wnhq\" (UniqueName: \"kubernetes.io/projected/ba39f8b6-6c3d-4008-b8ea-d24911e0c70e-kube-api-access-6wnhq\") pod \"tigera-operator-5588576f44-ps9r7\" (UID: \"ba39f8b6-6c3d-4008-b8ea-d24911e0c70e\") " pod="tigera-operator/tigera-operator-5588576f44-ps9r7" Mar 4 00:49:24.943883 kubelet[3200]: I0304 00:49:24.943018 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba39f8b6-6c3d-4008-b8ea-d24911e0c70e-var-lib-calico\") pod \"tigera-operator-5588576f44-ps9r7\" (UID: \"ba39f8b6-6c3d-4008-b8ea-d24911e0c70e\") " pod="tigera-operator/tigera-operator-5588576f44-ps9r7" Mar 4 00:49:24.951100 systemd[1]: Created slice kubepods-besteffort-podba39f8b6_6c3d_4008_b8ea_d24911e0c70e.slice - libcontainer container kubepods-besteffort-podba39f8b6_6c3d_4008_b8ea_d24911e0c70e.slice. Mar 4 00:49:24.971813 systemd[1]: Started cri-containerd-c7ffcfc6737977c64a23a2d873565f3e3cca259427f7dda9d28a64875cacdeb8.scope - libcontainer container c7ffcfc6737977c64a23a2d873565f3e3cca259427f7dda9d28a64875cacdeb8. Mar 4 00:49:24.999751 containerd[1718]: time="2026-03-04T00:49:24.999710809Z" level=info msg="StartContainer for \"c7ffcfc6737977c64a23a2d873565f3e3cca259427f7dda9d28a64875cacdeb8\" returns successfully" Mar 4 00:49:25.263243 containerd[1718]: time="2026-03-04T00:49:25.262890818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-ps9r7,Uid:ba39f8b6-6c3d-4008-b8ea-d24911e0c70e,Namespace:tigera-operator,Attempt:0,}" Mar 4 00:49:25.306365 containerd[1718]: time="2026-03-04T00:49:25.306287146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:25.306504 containerd[1718]: time="2026-03-04T00:49:25.306389906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:25.306504 containerd[1718]: time="2026-03-04T00:49:25.306406746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:25.306596 containerd[1718]: time="2026-03-04T00:49:25.306524306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:25.320737 systemd[1]: Started cri-containerd-ae55e2e85abaa69c00c19a090c3aa9ac615ce3cd32be922928af55b6e58617b9.scope - libcontainer container ae55e2e85abaa69c00c19a090c3aa9ac615ce3cd32be922928af55b6e58617b9. Mar 4 00:49:25.353532 containerd[1718]: time="2026-03-04T00:49:25.353492477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-ps9r7,Uid:ba39f8b6-6c3d-4008-b8ea-d24911e0c70e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ae55e2e85abaa69c00c19a090c3aa9ac615ce3cd32be922928af55b6e58617b9\"" Mar 4 00:49:25.356793 containerd[1718]: time="2026-03-04T00:49:25.356739641Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 00:49:27.108776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441703380.mount: Deactivated successfully. 
Mar 4 00:49:27.518601 containerd[1718]: time="2026-03-04T00:49:27.518103933Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:27.522420 containerd[1718]: time="2026-03-04T00:49:27.522270578Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 4 00:49:27.526927 containerd[1718]: time="2026-03-04T00:49:27.526621702Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:27.532664 containerd[1718]: time="2026-03-04T00:49:27.532620789Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:27.533542 containerd[1718]: time="2026-03-04T00:49:27.533514510Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.176739229s" Mar 4 00:49:27.533667 containerd[1718]: time="2026-03-04T00:49:27.533649310Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 4 00:49:27.542052 containerd[1718]: time="2026-03-04T00:49:27.542014959Z" level=info msg="CreateContainer within sandbox \"ae55e2e85abaa69c00c19a090c3aa9ac615ce3cd32be922928af55b6e58617b9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 00:49:27.583186 containerd[1718]: time="2026-03-04T00:49:27.583070684Z" level=info msg="CreateContainer within sandbox 
\"ae55e2e85abaa69c00c19a090c3aa9ac615ce3cd32be922928af55b6e58617b9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"236a3442db10c0a17b924498ca2bb5fb50a4ec51afc5c60ff80d1a1247db7338\"" Mar 4 00:49:27.584756 containerd[1718]: time="2026-03-04T00:49:27.584718366Z" level=info msg="StartContainer for \"236a3442db10c0a17b924498ca2bb5fb50a4ec51afc5c60ff80d1a1247db7338\"" Mar 4 00:49:27.608719 systemd[1]: Started cri-containerd-236a3442db10c0a17b924498ca2bb5fb50a4ec51afc5c60ff80d1a1247db7338.scope - libcontainer container 236a3442db10c0a17b924498ca2bb5fb50a4ec51afc5c60ff80d1a1247db7338. Mar 4 00:49:27.635219 containerd[1718]: time="2026-03-04T00:49:27.635152502Z" level=info msg="StartContainer for \"236a3442db10c0a17b924498ca2bb5fb50a4ec51afc5c60ff80d1a1247db7338\" returns successfully" Mar 4 00:49:28.029179 kubelet[3200]: I0304 00:49:28.029120 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ccqzt" podStartSLOduration=5.029105694 podStartE2EDuration="5.029105694s" podCreationTimestamp="2026-03-04 00:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:49:26.019471008 +0000 UTC m=+7.272536312" watchObservedRunningTime="2026-03-04 00:49:28.029105694 +0000 UTC m=+9.282170998" Mar 4 00:49:28.029595 kubelet[3200]: I0304 00:49:28.029211 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-ps9r7" podStartSLOduration=1.849494862 podStartE2EDuration="4.029207214s" podCreationTimestamp="2026-03-04 00:49:24 +0000 UTC" firstStartedPulling="2026-03-04 00:49:25.354908679 +0000 UTC m=+6.607974023" lastFinishedPulling="2026-03-04 00:49:27.534621111 +0000 UTC m=+8.787686375" observedRunningTime="2026-03-04 00:49:28.028218733 +0000 UTC m=+9.281284077" watchObservedRunningTime="2026-03-04 00:49:28.029207214 +0000 UTC m=+9.282272598" Mar 4 
00:49:33.434098 sudo[2210]: pam_unix(sudo:session): session closed for user root Mar 4 00:49:33.513774 sshd[2207]: pam_unix(sshd:session): session closed for user core Mar 4 00:49:33.517803 systemd[1]: sshd@6-10.200.20.22:22-10.200.16.10:38566.service: Deactivated successfully. Mar 4 00:49:33.520988 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 00:49:33.521208 systemd[1]: session-9.scope: Consumed 5.946s CPU time, 152.3M memory peak, 0B memory swap peak. Mar 4 00:49:33.522198 systemd-logind[1694]: Session 9 logged out. Waiting for processes to exit. Mar 4 00:49:33.523760 systemd-logind[1694]: Removed session 9. Mar 4 00:49:39.411426 systemd[1]: Created slice kubepods-besteffort-pod44dde97c_45c5_4e1b_a21f_908900cd7715.slice - libcontainer container kubepods-besteffort-pod44dde97c_45c5_4e1b_a21f_908900cd7715.slice. Mar 4 00:49:39.431149 kubelet[3200]: I0304 00:49:39.431115 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44dde97c-45c5-4e1b-a21f-908900cd7715-tigera-ca-bundle\") pod \"calico-typha-7857cc676d-d8ll2\" (UID: \"44dde97c-45c5-4e1b-a21f-908900cd7715\") " pod="calico-system/calico-typha-7857cc676d-d8ll2" Mar 4 00:49:39.432504 kubelet[3200]: I0304 00:49:39.431507 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/44dde97c-45c5-4e1b-a21f-908900cd7715-typha-certs\") pod \"calico-typha-7857cc676d-d8ll2\" (UID: \"44dde97c-45c5-4e1b-a21f-908900cd7715\") " pod="calico-system/calico-typha-7857cc676d-d8ll2" Mar 4 00:49:39.432504 kubelet[3200]: I0304 00:49:39.431538 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxwfv\" (UniqueName: \"kubernetes.io/projected/44dde97c-45c5-4e1b-a21f-908900cd7715-kube-api-access-mxwfv\") pod \"calico-typha-7857cc676d-d8ll2\" (UID: 
\"44dde97c-45c5-4e1b-a21f-908900cd7715\") " pod="calico-system/calico-typha-7857cc676d-d8ll2" Mar 4 00:49:39.504831 systemd[1]: Created slice kubepods-besteffort-pod8c3e84ef_4900_4c5b_bf91_31415fb6f9a8.slice - libcontainer container kubepods-besteffort-pod8c3e84ef_4900_4c5b_bf91_31415fb6f9a8.slice. Mar 4 00:49:39.533396 kubelet[3200]: I0304 00:49:39.532237 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-flexvol-driver-host\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533396 kubelet[3200]: I0304 00:49:39.532293 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-bpffs\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533396 kubelet[3200]: I0304 00:49:39.532308 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-node-certs\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533396 kubelet[3200]: I0304 00:49:39.532323 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-nodeproc\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533396 kubelet[3200]: I0304 00:49:39.532340 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-cni-bin-dir\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533638 kubelet[3200]: I0304 00:49:39.532352 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-policysync\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533638 kubelet[3200]: I0304 00:49:39.532365 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-lib-modules\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533638 kubelet[3200]: I0304 00:49:39.532379 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-var-run-calico\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533638 kubelet[3200]: I0304 00:49:39.532393 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-sys-fs\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533638 kubelet[3200]: I0304 00:49:39.532408 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-cni-net-dir\") pod 
\"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533740 kubelet[3200]: I0304 00:49:39.532422 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-tigera-ca-bundle\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533740 kubelet[3200]: I0304 00:49:39.532435 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfsgv\" (UniqueName: \"kubernetes.io/projected/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-kube-api-access-tfsgv\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533740 kubelet[3200]: I0304 00:49:39.532449 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-var-lib-calico\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533740 kubelet[3200]: I0304 00:49:39.532478 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-cni-log-dir\") pod \"calico-node-hf4ln\" (UID: \"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.533740 kubelet[3200]: I0304 00:49:39.532492 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c3e84ef-4900-4c5b-bf91-31415fb6f9a8-xtables-lock\") pod \"calico-node-hf4ln\" (UID: 
\"8c3e84ef-4900-4c5b-bf91-31415fb6f9a8\") " pod="calico-system/calico-node-hf4ln" Mar 4 00:49:39.606495 kubelet[3200]: E0304 00:49:39.606454 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:39.636590 kubelet[3200]: I0304 00:49:39.633720 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e7af50d6-af42-44be-9485-418ddf8e697a-socket-dir\") pod \"csi-node-driver-c52z8\" (UID: \"e7af50d6-af42-44be-9485-418ddf8e697a\") " pod="calico-system/csi-node-driver-c52z8" Mar 4 00:49:39.636590 kubelet[3200]: I0304 00:49:39.633775 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e7af50d6-af42-44be-9485-418ddf8e697a-registration-dir\") pod \"csi-node-driver-c52z8\" (UID: \"e7af50d6-af42-44be-9485-418ddf8e697a\") " pod="calico-system/csi-node-driver-c52z8" Mar 4 00:49:39.636590 kubelet[3200]: I0304 00:49:39.633792 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e7af50d6-af42-44be-9485-418ddf8e697a-varrun\") pod \"csi-node-driver-c52z8\" (UID: \"e7af50d6-af42-44be-9485-418ddf8e697a\") " pod="calico-system/csi-node-driver-c52z8" Mar 4 00:49:39.636590 kubelet[3200]: I0304 00:49:39.634491 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7af50d6-af42-44be-9485-418ddf8e697a-kubelet-dir\") pod \"csi-node-driver-c52z8\" (UID: \"e7af50d6-af42-44be-9485-418ddf8e697a\") " 
pod="calico-system/csi-node-driver-c52z8" Mar 4 00:49:39.636590 kubelet[3200]: I0304 00:49:39.634608 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkwx\" (UniqueName: \"kubernetes.io/projected/e7af50d6-af42-44be-9485-418ddf8e697a-kube-api-access-vmkwx\") pod \"csi-node-driver-c52z8\" (UID: \"e7af50d6-af42-44be-9485-418ddf8e697a\") " pod="calico-system/csi-node-driver-c52z8" Mar 4 00:49:39.637974 kubelet[3200]: E0304 00:49:39.637954 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.638079 kubelet[3200]: W0304 00:49:39.638065 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.638138 kubelet[3200]: E0304 00:49:39.638128 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.638342 kubelet[3200]: E0304 00:49:39.638331 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.638408 kubelet[3200]: W0304 00:49:39.638398 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.638530 kubelet[3200]: E0304 00:49:39.638461 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.728306 containerd[1718]: time="2026-03-04T00:49:39.727963887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7857cc676d-d8ll2,Uid:44dde97c-45c5-4e1b-a21f-908900cd7715,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:39.735237 kubelet[3200]: E0304 00:49:39.735216 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.735363 kubelet[3200]: W0304 00:49:39.735349 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.735432 kubelet[3200]: E0304 00:49:39.735420 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.735696 kubelet[3200]: E0304 00:49:39.735684 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.735772 kubelet[3200]: W0304 00:49:39.735761 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.735834 kubelet[3200]: E0304 00:49:39.735824 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.740171 kubelet[3200]: E0304 00:49:39.740157 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.740319 kubelet[3200]: W0304 00:49:39.740229 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.740319 kubelet[3200]: E0304 00:49:39.740244 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.740634 kubelet[3200]: E0304 00:49:39.740616 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.740798 kubelet[3200]: W0304 00:49:39.740684 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.740798 kubelet[3200]: E0304 00:49:39.740698 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.741114 kubelet[3200]: E0304 00:49:39.740874 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.741114 kubelet[3200]: W0304 00:49:39.740888 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.741114 kubelet[3200]: E0304 00:49:39.740900 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.741839 kubelet[3200]: E0304 00:49:39.741630 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.741839 kubelet[3200]: W0304 00:49:39.741643 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.741839 kubelet[3200]: E0304 00:49:39.741654 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.743330 kubelet[3200]: E0304 00:49:39.743316 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.743498 kubelet[3200]: W0304 00:49:39.743405 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.743498 kubelet[3200]: E0304 00:49:39.743419 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.743970 kubelet[3200]: E0304 00:49:39.743850 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.743970 kubelet[3200]: W0304 00:49:39.743864 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.743970 kubelet[3200]: E0304 00:49:39.743878 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.744681 kubelet[3200]: E0304 00:49:39.744534 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.744681 kubelet[3200]: W0304 00:49:39.744551 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.744681 kubelet[3200]: E0304 00:49:39.744599 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.745278 kubelet[3200]: E0304 00:49:39.745264 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.745364 kubelet[3200]: W0304 00:49:39.745352 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.745440 kubelet[3200]: E0304 00:49:39.745430 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.745686 kubelet[3200]: E0304 00:49:39.745672 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.745767 kubelet[3200]: W0304 00:49:39.745756 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.745826 kubelet[3200]: E0304 00:49:39.745815 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.746244 kubelet[3200]: E0304 00:49:39.746230 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.746404 kubelet[3200]: W0304 00:49:39.746327 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.746404 kubelet[3200]: E0304 00:49:39.746342 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.746814 kubelet[3200]: E0304 00:49:39.746734 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.746814 kubelet[3200]: W0304 00:49:39.746746 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.746814 kubelet[3200]: E0304 00:49:39.746764 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.751222 kubelet[3200]: E0304 00:49:39.748775 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.751222 kubelet[3200]: W0304 00:49:39.748786 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.751222 kubelet[3200]: E0304 00:49:39.748798 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.751222 kubelet[3200]: E0304 00:49:39.749940 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.751222 kubelet[3200]: W0304 00:49:39.749950 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.751222 kubelet[3200]: E0304 00:49:39.749961 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.751222 kubelet[3200]: E0304 00:49:39.750627 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.751222 kubelet[3200]: W0304 00:49:39.750638 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.751222 kubelet[3200]: E0304 00:49:39.750650 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:39.751745 kubelet[3200]: E0304 00:49:39.751583 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.751745 kubelet[3200]: W0304 00:49:39.751597 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.751745 kubelet[3200]: E0304 00:49:39.751610 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.752072 kubelet[3200]: E0304 00:49:39.751870 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:39.752072 kubelet[3200]: W0304 00:49:39.751882 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:39.752072 kubelet[3200]: E0304 00:49:39.751924 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:39.772890 containerd[1718]: time="2026-03-04T00:49:39.772694089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:39.772890 containerd[1718]: time="2026-03-04T00:49:39.772741889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:39.772890 containerd[1718]: time="2026-03-04T00:49:39.772762329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:39.772890 containerd[1718]: time="2026-03-04T00:49:39.772833729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:39.792735 systemd[1]: Started cri-containerd-196ea783da1f55d76bdcad896a78583968c28a0893829152e7e9ea3e082e9c99.scope - libcontainer container 196ea783da1f55d76bdcad896a78583968c28a0893829152e7e9ea3e082e9c99. Mar 4 00:49:39.815595 containerd[1718]: time="2026-03-04T00:49:39.815073209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hf4ln,Uid:8c3e84ef-4900-4c5b-bf91-31415fb6f9a8,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:39.819694 containerd[1718]: time="2026-03-04T00:49:39.819659454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7857cc676d-d8ll2,Uid:44dde97c-45c5-4e1b-a21f-908900cd7715,Namespace:calico-system,Attempt:0,} returns sandbox id \"196ea783da1f55d76bdcad896a78583968c28a0893829152e7e9ea3e082e9c99\"" Mar 4 00:49:39.822081 containerd[1718]: time="2026-03-04T00:49:39.822054696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 4 00:49:39.863983 containerd[1718]: time="2026-03-04T00:49:39.863670936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:39.863983 containerd[1718]: time="2026-03-04T00:49:39.863720736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:39.863983 containerd[1718]: time="2026-03-04T00:49:39.863735496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:39.863983 containerd[1718]: time="2026-03-04T00:49:39.863801696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:39.882825 systemd[1]: Started cri-containerd-c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987.scope - libcontainer container c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987. Mar 4 00:49:39.903457 containerd[1718]: time="2026-03-04T00:49:39.903423813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hf4ln,Uid:8c3e84ef-4900-4c5b-bf91-31415fb6f9a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\"" Mar 4 00:49:40.944644 kubelet[3200]: E0304 00:49:40.943021 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:41.073285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920853094.mount: Deactivated successfully. 
Mar 4 00:49:41.622587 containerd[1718]: time="2026-03-04T00:49:41.622383362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:41.625465 containerd[1718]: time="2026-03-04T00:49:41.625437365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 4 00:49:41.633720 containerd[1718]: time="2026-03-04T00:49:41.632966892Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:41.640376 containerd[1718]: time="2026-03-04T00:49:41.640339099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:41.641277 containerd[1718]: time="2026-03-04T00:49:41.641249660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.819162844s" Mar 4 00:49:41.641328 containerd[1718]: time="2026-03-04T00:49:41.641278940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 4 00:49:41.643185 containerd[1718]: time="2026-03-04T00:49:41.642966821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 4 00:49:41.662153 containerd[1718]: time="2026-03-04T00:49:41.662112520Z" level=info msg="CreateContainer within sandbox \"196ea783da1f55d76bdcad896a78583968c28a0893829152e7e9ea3e082e9c99\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 4 00:49:41.699178 containerd[1718]: time="2026-03-04T00:49:41.699046075Z" level=info msg="CreateContainer within sandbox \"196ea783da1f55d76bdcad896a78583968c28a0893829152e7e9ea3e082e9c99\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"68dfbfb2a6e63186e095bb89ab0935886c9daf85dcf4f188f3ca1175e240e6e1\"" Mar 4 00:49:41.700813 containerd[1718]: time="2026-03-04T00:49:41.700781196Z" level=info msg="StartContainer for \"68dfbfb2a6e63186e095bb89ab0935886c9daf85dcf4f188f3ca1175e240e6e1\"" Mar 4 00:49:41.728716 systemd[1]: Started cri-containerd-68dfbfb2a6e63186e095bb89ab0935886c9daf85dcf4f188f3ca1175e240e6e1.scope - libcontainer container 68dfbfb2a6e63186e095bb89ab0935886c9daf85dcf4f188f3ca1175e240e6e1. Mar 4 00:49:41.770776 containerd[1718]: time="2026-03-04T00:49:41.770727303Z" level=info msg="StartContainer for \"68dfbfb2a6e63186e095bb89ab0935886c9daf85dcf4f188f3ca1175e240e6e1\" returns successfully" Mar 4 00:49:42.131753 kubelet[3200]: E0304 00:49:42.131642 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.131753 kubelet[3200]: W0304 00:49:42.131664 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.131753 kubelet[3200]: E0304 00:49:42.131682 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.132507 kubelet[3200]: E0304 00:49:42.132203 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.132507 kubelet[3200]: W0304 00:49:42.132217 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.132507 kubelet[3200]: E0304 00:49:42.132264 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.133196 kubelet[3200]: E0304 00:49:42.132699 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.133196 kubelet[3200]: W0304 00:49:42.132712 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.133196 kubelet[3200]: E0304 00:49:42.132724 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.133503 kubelet[3200]: E0304 00:49:42.133379 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.133503 kubelet[3200]: W0304 00:49:42.133394 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.133503 kubelet[3200]: E0304 00:49:42.133414 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.133869 kubelet[3200]: E0304 00:49:42.133766 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.133869 kubelet[3200]: W0304 00:49:42.133779 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.133869 kubelet[3200]: E0304 00:49:42.133790 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.134265 kubelet[3200]: E0304 00:49:42.134159 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.134265 kubelet[3200]: W0304 00:49:42.134180 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.134265 kubelet[3200]: E0304 00:49:42.134190 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.134572 kubelet[3200]: E0304 00:49:42.134513 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.134572 kubelet[3200]: W0304 00:49:42.134524 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.134572 kubelet[3200]: E0304 00:49:42.134534 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.135024 kubelet[3200]: E0304 00:49:42.134912 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.135024 kubelet[3200]: W0304 00:49:42.134924 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.135024 kubelet[3200]: E0304 00:49:42.134935 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.135328 kubelet[3200]: E0304 00:49:42.135239 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.135328 kubelet[3200]: W0304 00:49:42.135250 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.135328 kubelet[3200]: E0304 00:49:42.135260 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.135538 kubelet[3200]: E0304 00:49:42.135488 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.135538 kubelet[3200]: W0304 00:49:42.135498 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.135538 kubelet[3200]: E0304 00:49:42.135508 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.135871 kubelet[3200]: E0304 00:49:42.135799 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.135871 kubelet[3200]: W0304 00:49:42.135820 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.135871 kubelet[3200]: E0304 00:49:42.135830 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.136183 kubelet[3200]: E0304 00:49:42.136123 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.136183 kubelet[3200]: W0304 00:49:42.136133 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.136183 kubelet[3200]: E0304 00:49:42.136142 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.136520 kubelet[3200]: E0304 00:49:42.136459 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.136520 kubelet[3200]: W0304 00:49:42.136470 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.136520 kubelet[3200]: E0304 00:49:42.136479 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.136887 kubelet[3200]: E0304 00:49:42.136802 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.136887 kubelet[3200]: W0304 00:49:42.136813 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.136887 kubelet[3200]: E0304 00:49:42.136822 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.137346 kubelet[3200]: E0304 00:49:42.137095 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.137346 kubelet[3200]: W0304 00:49:42.137114 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.137346 kubelet[3200]: E0304 00:49:42.137123 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.156946 kubelet[3200]: E0304 00:49:42.156917 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.156946 kubelet[3200]: W0304 00:49:42.156938 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.157180 kubelet[3200]: E0304 00:49:42.156956 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.157265 kubelet[3200]: E0304 00:49:42.157253 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.157265 kubelet[3200]: W0304 00:49:42.157263 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.157353 kubelet[3200]: E0304 00:49:42.157271 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.157521 kubelet[3200]: E0304 00:49:42.157506 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.157521 kubelet[3200]: W0304 00:49:42.157519 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.157602 kubelet[3200]: E0304 00:49:42.157528 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.157778 kubelet[3200]: E0304 00:49:42.157764 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.157778 kubelet[3200]: W0304 00:49:42.157776 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.157935 kubelet[3200]: E0304 00:49:42.157785 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.158013 kubelet[3200]: E0304 00:49:42.158003 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.158013 kubelet[3200]: W0304 00:49:42.158012 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.158169 kubelet[3200]: E0304 00:49:42.158019 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.158250 kubelet[3200]: E0304 00:49:42.158238 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.158250 kubelet[3200]: W0304 00:49:42.158249 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.158324 kubelet[3200]: E0304 00:49:42.158256 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.158472 kubelet[3200]: E0304 00:49:42.158461 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.158472 kubelet[3200]: W0304 00:49:42.158470 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.158776 kubelet[3200]: E0304 00:49:42.158478 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.158776 kubelet[3200]: E0304 00:49:42.158644 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.158776 kubelet[3200]: W0304 00:49:42.158652 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.158776 kubelet[3200]: E0304 00:49:42.158660 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.158880 kubelet[3200]: E0304 00:49:42.158802 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.158880 kubelet[3200]: W0304 00:49:42.158809 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.158880 kubelet[3200]: E0304 00:49:42.158817 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.158996 kubelet[3200]: E0304 00:49:42.158983 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.158996 kubelet[3200]: W0304 00:49:42.158994 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.159158 kubelet[3200]: E0304 00:49:42.159003 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.159239 kubelet[3200]: E0304 00:49:42.159227 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.159239 kubelet[3200]: W0304 00:49:42.159238 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.159314 kubelet[3200]: E0304 00:49:42.159246 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.159471 kubelet[3200]: E0304 00:49:42.159460 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.159471 kubelet[3200]: W0304 00:49:42.159470 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.159538 kubelet[3200]: E0304 00:49:42.159478 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.159894 kubelet[3200]: E0304 00:49:42.159777 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.159894 kubelet[3200]: W0304 00:49:42.159792 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.159894 kubelet[3200]: E0304 00:49:42.159804 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.160131 kubelet[3200]: E0304 00:49:42.160066 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.160131 kubelet[3200]: W0304 00:49:42.160077 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.160131 kubelet[3200]: E0304 00:49:42.160087 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.160453 kubelet[3200]: E0304 00:49:42.160369 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.160453 kubelet[3200]: W0304 00:49:42.160381 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.160453 kubelet[3200]: E0304 00:49:42.160391 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.160883 kubelet[3200]: E0304 00:49:42.160739 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.160883 kubelet[3200]: W0304 00:49:42.160752 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.160883 kubelet[3200]: E0304 00:49:42.160764 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.161214 kubelet[3200]: E0304 00:49:42.160989 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.161214 kubelet[3200]: W0304 00:49:42.161003 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.161214 kubelet[3200]: E0304 00:49:42.161014 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:49:42.161482 kubelet[3200]: E0304 00:49:42.161470 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:49:42.161549 kubelet[3200]: W0304 00:49:42.161538 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:49:42.161619 kubelet[3200]: E0304 00:49:42.161608 3200 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:49:42.903403 containerd[1718]: time="2026-03-04T00:49:42.902631855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:42.906437 containerd[1718]: time="2026-03-04T00:49:42.906410819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 4 00:49:42.910441 containerd[1718]: time="2026-03-04T00:49:42.910415862Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:42.915709 containerd[1718]: time="2026-03-04T00:49:42.915662227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:42.916470 containerd[1718]: time="2026-03-04T00:49:42.916439828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.273438687s" Mar 4 00:49:42.916553 containerd[1718]: time="2026-03-04T00:49:42.916537748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 4 00:49:42.925625 containerd[1718]: time="2026-03-04T00:49:42.925592797Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 4 00:49:42.941622 kubelet[3200]: E0304 00:49:42.941583 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:42.989094 containerd[1718]: time="2026-03-04T00:49:42.989051097Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b\"" Mar 4 00:49:42.990720 containerd[1718]: time="2026-03-04T00:49:42.990030418Z" level=info msg="StartContainer for \"5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b\"" Mar 4 00:49:43.023728 systemd[1]: Started cri-containerd-5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b.scope - libcontainer container 5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b. Mar 4 00:49:43.053282 containerd[1718]: time="2026-03-04T00:49:43.052727637Z" level=info msg="StartContainer for \"5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b\" returns successfully" Mar 4 00:49:43.058200 kubelet[3200]: I0304 00:49:43.057388 3200 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:49:43.058702 systemd[1]: cri-containerd-5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b.scope: Deactivated successfully. 
Mar 4 00:49:43.081822 kubelet[3200]: I0304 00:49:43.081765 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7857cc676d-d8ll2" podStartSLOduration=2.260467219 podStartE2EDuration="4.081750665s" podCreationTimestamp="2026-03-04 00:49:39 +0000 UTC" firstStartedPulling="2026-03-04 00:49:39.821180895 +0000 UTC m=+21.074246199" lastFinishedPulling="2026-03-04 00:49:41.642464341 +0000 UTC m=+22.895529645" observedRunningTime="2026-03-04 00:49:42.077252353 +0000 UTC m=+23.330317657" watchObservedRunningTime="2026-03-04 00:49:43.081750665 +0000 UTC m=+24.334815969" Mar 4 00:49:43.964698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b-rootfs.mount: Deactivated successfully. Mar 4 00:49:44.211655 containerd[1718]: time="2026-03-04T00:49:44.211600055Z" level=info msg="shim disconnected" id=5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b namespace=k8s.io Mar 4 00:49:44.212234 containerd[1718]: time="2026-03-04T00:49:44.212058336Z" level=warning msg="cleaning up after shim disconnected" id=5e61ba090a21b2217b8fbb94726cf6adaef5fa252867909d2ba97752f7e6142b namespace=k8s.io Mar 4 00:49:44.212234 containerd[1718]: time="2026-03-04T00:49:44.212075696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 00:49:44.944573 kubelet[3200]: E0304 00:49:44.942295 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:45.065234 containerd[1718]: time="2026-03-04T00:49:45.065153940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 4 00:49:46.942928 kubelet[3200]: E0304 00:49:46.942725 3200 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:48.946106 kubelet[3200]: E0304 00:49:48.946070 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:49.316325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358028794.mount: Deactivated successfully. Mar 4 00:49:49.715052 containerd[1718]: time="2026-03-04T00:49:49.714995548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:49.718562 containerd[1718]: time="2026-03-04T00:49:49.718529672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 4 00:49:49.722910 containerd[1718]: time="2026-03-04T00:49:49.722884398Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:49.731896 containerd[1718]: time="2026-03-04T00:49:49.731729449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:49.732772 containerd[1718]: time="2026-03-04T00:49:49.732262529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.667048429s" Mar 4 00:49:49.732772 containerd[1718]: time="2026-03-04T00:49:49.732292409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 4 00:49:49.743743 containerd[1718]: time="2026-03-04T00:49:49.743710944Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 4 00:49:49.790307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729169553.mount: Deactivated successfully. Mar 4 00:49:49.811104 containerd[1718]: time="2026-03-04T00:49:49.811056948Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46\"" Mar 4 00:49:49.811964 containerd[1718]: time="2026-03-04T00:49:49.811865469Z" level=info msg="StartContainer for \"6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46\"" Mar 4 00:49:49.839765 systemd[1]: Started cri-containerd-6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46.scope - libcontainer container 6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46. Mar 4 00:49:49.870113 containerd[1718]: time="2026-03-04T00:49:49.870068983Z" level=info msg="StartContainer for \"6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46\" returns successfully" Mar 4 00:49:49.903300 systemd[1]: cri-containerd-6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46.scope: Deactivated successfully. 
Mar 4 00:49:50.316691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46-rootfs.mount: Deactivated successfully. Mar 4 00:49:50.943505 kubelet[3200]: E0304 00:49:50.942998 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:51.147167 containerd[1718]: time="2026-03-04T00:49:51.146970973Z" level=info msg="shim disconnected" id=6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46 namespace=k8s.io Mar 4 00:49:51.147167 containerd[1718]: time="2026-03-04T00:49:51.147020893Z" level=warning msg="cleaning up after shim disconnected" id=6decda9428495801765a4da9ca94298d3d580d3f8b12a9ab7743cd3a25825f46 namespace=k8s.io Mar 4 00:49:51.147167 containerd[1718]: time="2026-03-04T00:49:51.147030693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 00:49:52.081724 containerd[1718]: time="2026-03-04T00:49:52.081668635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 4 00:49:52.942349 kubelet[3200]: E0304 00:49:52.942163 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:54.427373 containerd[1718]: time="2026-03-04T00:49:54.427332512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:54.430777 containerd[1718]: time="2026-03-04T00:49:54.430586075Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 4 00:49:54.434109 containerd[1718]: time="2026-03-04T00:49:54.434086478Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:54.439725 containerd[1718]: time="2026-03-04T00:49:54.439456003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:49:54.440216 containerd[1718]: time="2026-03-04T00:49:54.440188084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.358463529s" Mar 4 00:49:54.440272 containerd[1718]: time="2026-03-04T00:49:54.440215564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 4 00:49:54.448386 containerd[1718]: time="2026-03-04T00:49:54.448358411Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 4 00:49:54.495499 containerd[1718]: time="2026-03-04T00:49:54.495458053Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83\"" Mar 4 00:49:54.497601 containerd[1718]: time="2026-03-04T00:49:54.496093694Z" level=info msg="StartContainer for 
\"29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83\"" Mar 4 00:49:54.525772 systemd[1]: Started cri-containerd-29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83.scope - libcontainer container 29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83. Mar 4 00:49:54.553806 containerd[1718]: time="2026-03-04T00:49:54.553763186Z" level=info msg="StartContainer for \"29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83\" returns successfully" Mar 4 00:49:54.942896 kubelet[3200]: E0304 00:49:54.942852 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:56.361480 systemd[1]: cri-containerd-29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83.scope: Deactivated successfully. Mar 4 00:49:56.385296 kubelet[3200]: I0304 00:49:56.384672 3200 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 4 00:49:56.390911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83-rootfs.mount: Deactivated successfully. Mar 4 00:49:56.743744 systemd[1]: Created slice kubepods-burstable-pod42250f5b_0fe9_4ffb_8a66_17380f81c557.slice - libcontainer container kubepods-burstable-pod42250f5b_0fe9_4ffb_8a66_17380f81c557.slice. 
Mar 4 00:49:56.752527 containerd[1718]: time="2026-03-04T00:49:56.751353318Z" level=info msg="shim disconnected" id=29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83 namespace=k8s.io Mar 4 00:49:56.752527 containerd[1718]: time="2026-03-04T00:49:56.751412798Z" level=warning msg="cleaning up after shim disconnected" id=29364a3255b11bedbd25304a9a40f1cb1832a99d6129245ac43699cbd7412b83 namespace=k8s.io Mar 4 00:49:56.752527 containerd[1718]: time="2026-03-04T00:49:56.751424878Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 00:49:56.752317 systemd[1]: Created slice kubepods-besteffort-pode6f4eb03_2699_45e9_b386_310dd3f95ccd.slice - libcontainer container kubepods-besteffort-pode6f4eb03_2699_45e9_b386_310dd3f95ccd.slice. Mar 4 00:49:56.774070 systemd[1]: Created slice kubepods-burstable-pod51231700_2a49_4663_85fb_fd5115fc08f4.slice - libcontainer container kubepods-burstable-pod51231700_2a49_4663_85fb_fd5115fc08f4.slice. Mar 4 00:49:56.781768 containerd[1718]: time="2026-03-04T00:49:56.780075904Z" level=warning msg="cleanup warnings time=\"2026-03-04T00:49:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 4 00:49:56.787009 systemd[1]: Created slice kubepods-besteffort-podcc646cc0_85f1_4307_a888_4c768b143ff9.slice - libcontainer container kubepods-besteffort-podcc646cc0_85f1_4307_a888_4c768b143ff9.slice. Mar 4 00:49:56.793177 systemd[1]: Created slice kubepods-besteffort-poda9862f74_c05c_453b_b7c5_c17945e70f61.slice - libcontainer container kubepods-besteffort-poda9862f74_c05c_453b_b7c5_c17945e70f61.slice. Mar 4 00:49:56.802439 systemd[1]: Created slice kubepods-besteffort-pod31c90154_2140_4ee5_b886_dc4e9943430b.slice - libcontainer container kubepods-besteffort-pod31c90154_2140_4ee5_b886_dc4e9943430b.slice. 
Mar 4 00:49:56.807255 systemd[1]: Created slice kubepods-besteffort-pod0012b3a2_8c12_4b79_8f51_74e600877b09.slice - libcontainer container kubepods-besteffort-pod0012b3a2_8c12_4b79_8f51_74e600877b09.slice. Mar 4 00:49:56.850661 kubelet[3200]: I0304 00:49:56.850609 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-ca-bundle\") pod \"whisker-6467f5f748-24g99\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " pod="calico-system/whisker-6467f5f748-24g99" Mar 4 00:49:56.850835 kubelet[3200]: I0304 00:49:56.850703 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rgkc\" (UniqueName: \"kubernetes.io/projected/51231700-2a49-4663-85fb-fd5115fc08f4-kube-api-access-8rgkc\") pod \"coredns-66bc5c9577-mxljm\" (UID: \"51231700-2a49-4663-85fb-fd5115fc08f4\") " pod="kube-system/coredns-66bc5c9577-mxljm" Mar 4 00:49:56.850835 kubelet[3200]: I0304 00:49:56.850725 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6p4j\" (UniqueName: \"kubernetes.io/projected/cc646cc0-85f1-4307-a888-4c768b143ff9-kube-api-access-m6p4j\") pod \"whisker-6467f5f748-24g99\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " pod="calico-system/whisker-6467f5f748-24g99" Mar 4 00:49:56.850835 kubelet[3200]: I0304 00:49:56.850741 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31c90154-2140-4ee5-b886-dc4e9943430b-calico-apiserver-certs\") pod \"calico-apiserver-58c464678b-tr8bm\" (UID: \"31c90154-2140-4ee5-b886-dc4e9943430b\") " pod="calico-system/calico-apiserver-58c464678b-tr8bm" Mar 4 00:49:56.850835 kubelet[3200]: I0304 00:49:56.850787 3200 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a9862f74-c05c-453b-b7c5-c17945e70f61-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-2lgfs\" (UID: \"a9862f74-c05c-453b-b7c5-c17945e70f61\") " pod="calico-system/goldmane-cccfbd5cf-2lgfs" Mar 4 00:49:56.850835 kubelet[3200]: I0304 00:49:56.850809 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42250f5b-0fe9-4ffb-8a66-17380f81c557-config-volume\") pod \"coredns-66bc5c9577-6znkz\" (UID: \"42250f5b-0fe9-4ffb-8a66-17380f81c557\") " pod="kube-system/coredns-66bc5c9577-6znkz" Mar 4 00:49:56.850951 kubelet[3200]: I0304 00:49:56.850874 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-nginx-config\") pod \"whisker-6467f5f748-24g99\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " pod="calico-system/whisker-6467f5f748-24g99" Mar 4 00:49:56.850951 kubelet[3200]: I0304 00:49:56.850891 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-backend-key-pair\") pod \"whisker-6467f5f748-24g99\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " pod="calico-system/whisker-6467f5f748-24g99" Mar 4 00:49:56.850951 kubelet[3200]: I0304 00:49:56.850907 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0012b3a2-8c12-4b79-8f51-74e600877b09-calico-apiserver-certs\") pod \"calico-apiserver-58c464678b-zm8w9\" (UID: \"0012b3a2-8c12-4b79-8f51-74e600877b09\") " pod="calico-system/calico-apiserver-58c464678b-zm8w9" Mar 4 00:49:56.850951 kubelet[3200]: I0304 
00:49:56.850931 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqhz\" (UniqueName: \"kubernetes.io/projected/e6f4eb03-2699-45e9-b386-310dd3f95ccd-kube-api-access-lbqhz\") pod \"calico-kube-controllers-56b5c6bcf-xvchl\" (UID: \"e6f4eb03-2699-45e9-b386-310dd3f95ccd\") " pod="calico-system/calico-kube-controllers-56b5c6bcf-xvchl" Mar 4 00:49:56.851067 kubelet[3200]: I0304 00:49:56.850954 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptr8h\" (UniqueName: \"kubernetes.io/projected/0012b3a2-8c12-4b79-8f51-74e600877b09-kube-api-access-ptr8h\") pod \"calico-apiserver-58c464678b-zm8w9\" (UID: \"0012b3a2-8c12-4b79-8f51-74e600877b09\") " pod="calico-system/calico-apiserver-58c464678b-zm8w9" Mar 4 00:49:56.851067 kubelet[3200]: I0304 00:49:56.850969 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51231700-2a49-4663-85fb-fd5115fc08f4-config-volume\") pod \"coredns-66bc5c9577-mxljm\" (UID: \"51231700-2a49-4663-85fb-fd5115fc08f4\") " pod="kube-system/coredns-66bc5c9577-mxljm" Mar 4 00:49:56.851067 kubelet[3200]: I0304 00:49:56.851043 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcvgr\" (UniqueName: \"kubernetes.io/projected/31c90154-2140-4ee5-b886-dc4e9943430b-kube-api-access-bcvgr\") pod \"calico-apiserver-58c464678b-tr8bm\" (UID: \"31c90154-2140-4ee5-b886-dc4e9943430b\") " pod="calico-system/calico-apiserver-58c464678b-tr8bm" Mar 4 00:49:56.851148 kubelet[3200]: I0304 00:49:56.851059 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsh4z\" (UniqueName: \"kubernetes.io/projected/42250f5b-0fe9-4ffb-8a66-17380f81c557-kube-api-access-xsh4z\") pod \"coredns-66bc5c9577-6znkz\" (UID: 
\"42250f5b-0fe9-4ffb-8a66-17380f81c557\") " pod="kube-system/coredns-66bc5c9577-6znkz" Mar 4 00:49:56.851148 kubelet[3200]: I0304 00:49:56.851111 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6f4eb03-2699-45e9-b386-310dd3f95ccd-tigera-ca-bundle\") pod \"calico-kube-controllers-56b5c6bcf-xvchl\" (UID: \"e6f4eb03-2699-45e9-b386-310dd3f95ccd\") " pod="calico-system/calico-kube-controllers-56b5c6bcf-xvchl" Mar 4 00:49:56.851148 kubelet[3200]: I0304 00:49:56.851126 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9862f74-c05c-453b-b7c5-c17945e70f61-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-2lgfs\" (UID: \"a9862f74-c05c-453b-b7c5-c17945e70f61\") " pod="calico-system/goldmane-cccfbd5cf-2lgfs" Mar 4 00:49:56.851214 kubelet[3200]: I0304 00:49:56.851170 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9862f74-c05c-453b-b7c5-c17945e70f61-config\") pod \"goldmane-cccfbd5cf-2lgfs\" (UID: \"a9862f74-c05c-453b-b7c5-c17945e70f61\") " pod="calico-system/goldmane-cccfbd5cf-2lgfs" Mar 4 00:49:56.851214 kubelet[3200]: I0304 00:49:56.851185 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b6sv\" (UniqueName: \"kubernetes.io/projected/a9862f74-c05c-453b-b7c5-c17945e70f61-kube-api-access-6b6sv\") pod \"goldmane-cccfbd5cf-2lgfs\" (UID: \"a9862f74-c05c-453b-b7c5-c17945e70f61\") " pod="calico-system/goldmane-cccfbd5cf-2lgfs" Mar 4 00:49:56.949437 systemd[1]: Created slice kubepods-besteffort-pode7af50d6_af42_44be_9485_418ddf8e697a.slice - libcontainer container kubepods-besteffort-pode7af50d6_af42_44be_9485_418ddf8e697a.slice. 
Mar 4 00:49:56.974600 containerd[1718]: time="2026-03-04T00:49:56.970262554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c52z8,Uid:e7af50d6-af42-44be-9485-418ddf8e697a,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:57.054101 containerd[1718]: time="2026-03-04T00:49:57.053629189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6znkz,Uid:42250f5b-0fe9-4ffb-8a66-17380f81c557,Namespace:kube-system,Attempt:0,}" Mar 4 00:49:57.061717 containerd[1718]: time="2026-03-04T00:49:57.061437876Z" level=error msg="Failed to destroy network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.062026 containerd[1718]: time="2026-03-04T00:49:57.061958637Z" level=error msg="encountered an error cleaning up failed sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.062150 containerd[1718]: time="2026-03-04T00:49:57.062091037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c52z8,Uid:e7af50d6-af42-44be-9485-418ddf8e697a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.063041 kubelet[3200]: E0304 00:49:57.062883 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.063583 kubelet[3200]: E0304 00:49:57.063227 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c52z8" Mar 4 00:49:57.063583 kubelet[3200]: E0304 00:49:57.063273 3200 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c52z8" Mar 4 00:49:57.063583 kubelet[3200]: E0304 00:49:57.063326 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c52z8_calico-system(e7af50d6-af42-44be-9485-418ddf8e697a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c52z8_calico-system(e7af50d6-af42-44be-9485-418ddf8e697a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:57.069595 containerd[1718]: time="2026-03-04T00:49:57.069320043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56b5c6bcf-xvchl,Uid:e6f4eb03-2699-45e9-b386-310dd3f95ccd,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:57.088263 containerd[1718]: time="2026-03-04T00:49:57.087944420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mxljm,Uid:51231700-2a49-4663-85fb-fd5115fc08f4,Namespace:kube-system,Attempt:0,}" Mar 4 00:49:57.093586 kubelet[3200]: I0304 00:49:57.093095 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:49:57.095079 containerd[1718]: time="2026-03-04T00:49:57.094820986Z" level=info msg="StopPodSandbox for \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\"" Mar 4 00:49:57.095079 containerd[1718]: time="2026-03-04T00:49:57.094972106Z" level=info msg="Ensure that sandbox 928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3 in task-service has been cleanup successfully" Mar 4 00:49:57.098007 containerd[1718]: time="2026-03-04T00:49:57.097979429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6467f5f748-24g99,Uid:cc646cc0-85f1-4307-a888-4c768b143ff9,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:57.110714 containerd[1718]: time="2026-03-04T00:49:57.110665920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2lgfs,Uid:a9862f74-c05c-453b-b7c5-c17945e70f61,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:57.119782 containerd[1718]: time="2026-03-04T00:49:57.119089688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-tr8bm,Uid:31c90154-2140-4ee5-b886-dc4e9943430b,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:57.128859 containerd[1718]: 
time="2026-03-04T00:49:57.128331736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-zm8w9,Uid:0012b3a2-8c12-4b79-8f51-74e600877b09,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:57.141201 containerd[1718]: time="2026-03-04T00:49:57.141161508Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 00:49:57.162071 containerd[1718]: time="2026-03-04T00:49:57.162019926Z" level=error msg="StopPodSandbox for \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\" failed" error="failed to destroy network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.163425 kubelet[3200]: E0304 00:49:57.163373 3200 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:49:57.163485 kubelet[3200]: E0304 00:49:57.163445 3200 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3"} Mar 4 00:49:57.164493 kubelet[3200]: E0304 00:49:57.163506 3200 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7af50d6-af42-44be-9485-418ddf8e697a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 00:49:57.164493 kubelet[3200]: E0304 00:49:57.163531 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7af50d6-af42-44be-9485-418ddf8e697a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c52z8" podUID="e7af50d6-af42-44be-9485-418ddf8e697a" Mar 4 00:49:57.224343 containerd[1718]: time="2026-03-04T00:49:57.224280062Z" level=error msg="Failed to destroy network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.224842 containerd[1718]: time="2026-03-04T00:49:57.224664463Z" level=error msg="encountered an error cleaning up failed sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.224842 containerd[1718]: time="2026-03-04T00:49:57.224731143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56b5c6bcf-xvchl,Uid:e6f4eb03-2699-45e9-b386-310dd3f95ccd,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.225585 kubelet[3200]: E0304 00:49:57.224975 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.225585 kubelet[3200]: E0304 00:49:57.225031 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56b5c6bcf-xvchl" Mar 4 00:49:57.225585 kubelet[3200]: E0304 00:49:57.225050 3200 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56b5c6bcf-xvchl" Mar 4 00:49:57.225722 kubelet[3200]: E0304 00:49:57.225099 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-56b5c6bcf-xvchl_calico-system(e6f4eb03-2699-45e9-b386-310dd3f95ccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56b5c6bcf-xvchl_calico-system(e6f4eb03-2699-45e9-b386-310dd3f95ccd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56b5c6bcf-xvchl" podUID="e6f4eb03-2699-45e9-b386-310dd3f95ccd" Mar 4 00:49:57.241826 containerd[1718]: time="2026-03-04T00:49:57.241777678Z" level=error msg="Failed to destroy network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.242146 containerd[1718]: time="2026-03-04T00:49:57.242094318Z" level=error msg="encountered an error cleaning up failed sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.242184 containerd[1718]: time="2026-03-04T00:49:57.242168478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6znkz,Uid:42250f5b-0fe9-4ffb-8a66-17380f81c557,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.242398 kubelet[3200]: E0304 00:49:57.242369 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.242832 kubelet[3200]: E0304 00:49:57.242495 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6znkz" Mar 4 00:49:57.242832 kubelet[3200]: E0304 00:49:57.242521 3200 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6znkz" Mar 4 00:49:57.242832 kubelet[3200]: E0304 00:49:57.242600 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6znkz_kube-system(42250f5b-0fe9-4ffb-8a66-17380f81c557)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6znkz_kube-system(42250f5b-0fe9-4ffb-8a66-17380f81c557)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6znkz" podUID="42250f5b-0fe9-4ffb-8a66-17380f81c557" Mar 4 00:49:57.357370 containerd[1718]: time="2026-03-04T00:49:57.355941980Z" level=info msg="CreateContainer within sandbox \"c532f6ec0860c2c208da17b705a7d9d975c055385af420485d6361af6ae65987\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"00320d363fc65163ef297a68a9b397fa7ff4154814e378b0a029c1c62fdb79c1\"" Mar 4 00:49:57.359080 containerd[1718]: time="2026-03-04T00:49:57.357644422Z" level=info msg="StartContainer for \"00320d363fc65163ef297a68a9b397fa7ff4154814e378b0a029c1c62fdb79c1\"" Mar 4 00:49:57.450721 containerd[1718]: time="2026-03-04T00:49:57.450672786Z" level=error msg="Failed to destroy network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.454479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6-shm.mount: Deactivated successfully. 
Mar 4 00:49:57.454783 containerd[1718]: time="2026-03-04T00:49:57.454687109Z" level=error msg="encountered an error cleaning up failed sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.454783 containerd[1718]: time="2026-03-04T00:49:57.454744349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mxljm,Uid:51231700-2a49-4663-85fb-fd5115fc08f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.457668 kubelet[3200]: E0304 00:49:57.456806 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.457668 kubelet[3200]: E0304 00:49:57.456869 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mxljm" Mar 4 00:49:57.457668 kubelet[3200]: E0304 00:49:57.456887 3200 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mxljm" Mar 4 00:49:57.458018 kubelet[3200]: E0304 00:49:57.456943 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mxljm_kube-system(51231700-2a49-4663-85fb-fd5115fc08f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mxljm_kube-system(51231700-2a49-4663-85fb-fd5115fc08f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mxljm" podUID="51231700-2a49-4663-85fb-fd5115fc08f4" Mar 4 00:49:57.471854 systemd[1]: run-containerd-runc-k8s.io-00320d363fc65163ef297a68a9b397fa7ff4154814e378b0a029c1c62fdb79c1-runc.OcgwRz.mount: Deactivated successfully. 
Mar 4 00:49:57.478674 containerd[1718]: time="2026-03-04T00:49:57.478627051Z" level=error msg="Failed to destroy network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.478999 containerd[1718]: time="2026-03-04T00:49:57.478974771Z" level=error msg="encountered an error cleaning up failed sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.479042 containerd[1718]: time="2026-03-04T00:49:57.479027731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2lgfs,Uid:a9862f74-c05c-453b-b7c5-c17945e70f61,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.479632 kubelet[3200]: E0304 00:49:57.479233 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.479632 kubelet[3200]: E0304 00:49:57.479298 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2lgfs" Mar 4 00:49:57.479632 kubelet[3200]: E0304 00:49:57.479315 3200 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-2lgfs" Mar 4 00:49:57.479837 kubelet[3200]: E0304 00:49:57.479360 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-2lgfs_calico-system(a9862f74-c05c-453b-b7c5-c17945e70f61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-2lgfs_calico-system(a9862f74-c05c-453b-b7c5-c17945e70f61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-2lgfs" podUID="a9862f74-c05c-453b-b7c5-c17945e70f61" Mar 4 00:49:57.486758 systemd[1]: Started cri-containerd-00320d363fc65163ef297a68a9b397fa7ff4154814e378b0a029c1c62fdb79c1.scope - libcontainer container 00320d363fc65163ef297a68a9b397fa7ff4154814e378b0a029c1c62fdb79c1. 
Mar 4 00:49:57.519344 containerd[1718]: time="2026-03-04T00:49:57.519306127Z" level=error msg="Failed to destroy network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.520501 containerd[1718]: time="2026-03-04T00:49:57.520469848Z" level=error msg="encountered an error cleaning up failed sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.520676 containerd[1718]: time="2026-03-04T00:49:57.520651968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6467f5f748-24g99,Uid:cc646cc0-85f1-4307-a888-4c768b143ff9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.521324 kubelet[3200]: E0304 00:49:57.520993 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.521324 kubelet[3200]: E0304 00:49:57.521041 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6467f5f748-24g99" Mar 4 00:49:57.521324 kubelet[3200]: E0304 00:49:57.521058 3200 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6467f5f748-24g99" Mar 4 00:49:57.521495 kubelet[3200]: E0304 00:49:57.521101 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6467f5f748-24g99_calico-system(cc646cc0-85f1-4307-a888-4c768b143ff9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6467f5f748-24g99_calico-system(cc646cc0-85f1-4307-a888-4c768b143ff9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6467f5f748-24g99" podUID="cc646cc0-85f1-4307-a888-4c768b143ff9" Mar 4 00:49:57.544732 containerd[1718]: time="2026-03-04T00:49:57.544487350Z" level=error msg="Failed to destroy network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Mar 4 00:49:57.544732 containerd[1718]: time="2026-03-04T00:49:57.544640910Z" level=info msg="StartContainer for \"00320d363fc65163ef297a68a9b397fa7ff4154814e378b0a029c1c62fdb79c1\" returns successfully" Mar 4 00:49:57.546373 containerd[1718]: time="2026-03-04T00:49:57.545719831Z" level=error msg="encountered an error cleaning up failed sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.546373 containerd[1718]: time="2026-03-04T00:49:57.545791631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-zm8w9,Uid:0012b3a2-8c12-4b79-8f51-74e600877b09,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.546962 kubelet[3200]: E0304 00:49:57.546731 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.546962 kubelet[3200]: E0304 00:49:57.546776 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-58c464678b-zm8w9" Mar 4 00:49:57.546962 kubelet[3200]: E0304 00:49:57.546841 3200 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-58c464678b-zm8w9" Mar 4 00:49:57.547083 kubelet[3200]: E0304 00:49:57.546923 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58c464678b-zm8w9_calico-system(0012b3a2-8c12-4b79-8f51-74e600877b09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58c464678b-zm8w9_calico-system(0012b3a2-8c12-4b79-8f51-74e600877b09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-58c464678b-zm8w9" podUID="0012b3a2-8c12-4b79-8f51-74e600877b09" Mar 4 00:49:57.550739 containerd[1718]: time="2026-03-04T00:49:57.550169435Z" level=error msg="Failed to destroy network for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.551741 containerd[1718]: time="2026-03-04T00:49:57.551640556Z" level=error msg="encountered an error cleaning up 
failed sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.551910 containerd[1718]: time="2026-03-04T00:49:57.551709716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-tr8bm,Uid:31c90154-2140-4ee5-b886-dc4e9943430b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.552298 kubelet[3200]: E0304 00:49:57.552191 3200 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:49:57.552298 kubelet[3200]: E0304 00:49:57.552263 3200 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-58c464678b-tr8bm" Mar 4 00:49:57.552509 kubelet[3200]: E0304 00:49:57.552279 3200 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-58c464678b-tr8bm" Mar 4 00:49:57.552742 kubelet[3200]: E0304 00:49:57.552637 3200 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58c464678b-tr8bm_calico-system(31c90154-2140-4ee5-b886-dc4e9943430b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58c464678b-tr8bm_calico-system(31c90154-2140-4ee5-b886-dc4e9943430b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-58c464678b-tr8bm" podUID="31c90154-2140-4ee5-b886-dc4e9943430b" Mar 4 00:49:58.109365 kubelet[3200]: I0304 00:49:58.108095 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:49:58.109501 containerd[1718]: time="2026-03-04T00:49:58.108726056Z" level=info msg="StopPodSandbox for \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\"" Mar 4 00:49:58.109501 containerd[1718]: time="2026-03-04T00:49:58.109084576Z" level=info msg="Ensure that sandbox 293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19 in task-service has been cleanup successfully" Mar 4 00:49:58.111623 kubelet[3200]: I0304 00:49:58.111597 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 
00:49:58.112935 containerd[1718]: time="2026-03-04T00:49:58.112624820Z" level=info msg="StopPodSandbox for \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\"" Mar 4 00:49:58.112935 containerd[1718]: time="2026-03-04T00:49:58.112761420Z" level=info msg="Ensure that sandbox 2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6 in task-service has been cleanup successfully" Mar 4 00:49:58.114031 kubelet[3200]: I0304 00:49:58.113703 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:49:58.114398 containerd[1718]: time="2026-03-04T00:49:58.114366021Z" level=info msg="StopPodSandbox for \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\"" Mar 4 00:49:58.115265 containerd[1718]: time="2026-03-04T00:49:58.114762341Z" level=info msg="Ensure that sandbox bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15 in task-service has been cleanup successfully" Mar 4 00:49:58.120498 kubelet[3200]: I0304 00:49:58.118668 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:49:58.120829 containerd[1718]: time="2026-03-04T00:49:58.120789067Z" level=info msg="StopPodSandbox for \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\"" Mar 4 00:49:58.121052 containerd[1718]: time="2026-03-04T00:49:58.121025907Z" level=info msg="Ensure that sandbox 406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0 in task-service has been cleanup successfully" Mar 4 00:49:58.140898 kubelet[3200]: I0304 00:49:58.140870 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:49:58.142996 containerd[1718]: time="2026-03-04T00:49:58.142963527Z" level=info msg="StopPodSandbox for 
\"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\"" Mar 4 00:49:58.143274 containerd[1718]: time="2026-03-04T00:49:58.143252007Z" level=info msg="Ensure that sandbox d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8 in task-service has been cleanup successfully" Mar 4 00:49:58.150723 kubelet[3200]: I0304 00:49:58.150701 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:49:58.151681 containerd[1718]: time="2026-03-04T00:49:58.151646895Z" level=info msg="StopPodSandbox for \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\"" Mar 4 00:49:58.152111 kubelet[3200]: I0304 00:49:58.152082 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:49:58.152500 containerd[1718]: time="2026-03-04T00:49:58.152327215Z" level=info msg="StopPodSandbox for \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\"" Mar 4 00:49:58.152500 containerd[1718]: time="2026-03-04T00:49:58.152458495Z" level=info msg="Ensure that sandbox 6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701 in task-service has been cleanup successfully" Mar 4 00:49:58.154845 containerd[1718]: time="2026-03-04T00:49:58.154639337Z" level=info msg="Ensure that sandbox ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389 in task-service has been cleanup successfully" Mar 4 00:49:58.194185 kubelet[3200]: I0304 00:49:58.193342 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hf4ln" podStartSLOduration=4.657025062 podStartE2EDuration="19.193323332s" podCreationTimestamp="2026-03-04 00:49:39 +0000 UTC" firstStartedPulling="2026-03-04 00:49:39.904841815 +0000 UTC m=+21.157907119" lastFinishedPulling="2026-03-04 00:49:54.441140085 +0000 UTC m=+35.694205389" 
observedRunningTime="2026-03-04 00:49:58.169502431 +0000 UTC m=+39.422567735" watchObservedRunningTime="2026-03-04 00:49:58.193323332 +0000 UTC m=+39.446388636" Mar 4 00:49:58.391097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19-shm.mount: Deactivated successfully. Mar 4 00:49:58.391203 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389-shm.mount: Deactivated successfully. Mar 4 00:49:58.391252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8-shm.mount: Deactivated successfully. Mar 4 00:49:58.391306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701-shm.mount: Deactivated successfully. Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.346 [INFO][4413] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.347 [INFO][4413] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" iface="eth0" netns="/var/run/netns/cni-c0110cd8-e704-796e-6690-5424918ffb7a" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.347 [INFO][4413] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" iface="eth0" netns="/var/run/netns/cni-c0110cd8-e704-796e-6690-5424918ffb7a" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.348 [INFO][4413] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" iface="eth0" netns="/var/run/netns/cni-c0110cd8-e704-796e-6690-5424918ffb7a" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.348 [INFO][4413] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.348 [INFO][4413] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.447 [INFO][4461] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.449 [INFO][4461] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.450 [INFO][4461] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.474 [WARNING][4461] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.474 [INFO][4461] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.476 [INFO][4461] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.500809 containerd[1718]: 2026-03-04 00:49:58.482 [INFO][4413] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:49:58.500809 containerd[1718]: time="2026-03-04T00:49:58.500655128Z" level=info msg="TearDown network for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\" successfully" Mar 4 00:49:58.500809 containerd[1718]: time="2026-03-04T00:49:58.500689248Z" level=info msg="StopPodSandbox for \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\" returns successfully" Mar 4 00:49:58.503231 systemd[1]: run-netns-cni\x2dc0110cd8\x2de704\x2d796e\x2d6690\x2d5424918ffb7a.mount: Deactivated successfully. 
Mar 4 00:49:58.517609 containerd[1718]: time="2026-03-04T00:49:58.517149063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-tr8bm,Uid:31c90154-2140-4ee5-b886-dc4e9943430b,Namespace:calico-system,Attempt:1,}" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.329 [INFO][4408] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.329 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" iface="eth0" netns="/var/run/netns/cni-ec69fd98-d5fc-fd58-5f68-2db6287a1095" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.330 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" iface="eth0" netns="/var/run/netns/cni-ec69fd98-d5fc-fd58-5f68-2db6287a1095" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.331 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" iface="eth0" netns="/var/run/netns/cni-ec69fd98-d5fc-fd58-5f68-2db6287a1095" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.331 [INFO][4408] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.331 [INFO][4408] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.450 [INFO][4456] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.452 [INFO][4456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.476 [INFO][4456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.496 [WARNING][4456] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.496 [INFO][4456] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.505 [INFO][4456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.520893 containerd[1718]: 2026-03-04 00:49:58.511 [INFO][4408] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:49:58.525515 containerd[1718]: time="2026-03-04T00:49:58.522706468Z" level=info msg="TearDown network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\" successfully" Mar 4 00:49:58.525515 containerd[1718]: time="2026-03-04T00:49:58.522734588Z" level=info msg="StopPodSandbox for \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\" returns successfully" Mar 4 00:49:58.524146 systemd[1]: run-netns-cni\x2dec69fd98\x2dd5fc\x2dfd58\x2d5f68\x2d2db6287a1095.mount: Deactivated successfully. 
Mar 4 00:49:58.530331 containerd[1718]: time="2026-03-04T00:49:58.530300394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2lgfs,Uid:a9862f74-c05c-453b-b7c5-c17945e70f61,Namespace:calico-system,Attempt:1,}" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.311 [INFO][4370] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.311 [INFO][4370] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" iface="eth0" netns="/var/run/netns/cni-459ae802-d300-3c93-6cb2-bf20286633ff" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.311 [INFO][4370] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" iface="eth0" netns="/var/run/netns/cni-459ae802-d300-3c93-6cb2-bf20286633ff" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.314 [INFO][4370] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" iface="eth0" netns="/var/run/netns/cni-459ae802-d300-3c93-6cb2-bf20286633ff" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.314 [INFO][4370] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.314 [INFO][4370] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.489 [INFO][4448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.493 [INFO][4448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.505 [INFO][4448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.537 [WARNING][4448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.537 [INFO][4448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.548 [INFO][4448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.554503 containerd[1718]: 2026-03-04 00:49:58.551 [INFO][4370] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:49:58.555097 containerd[1718]: time="2026-03-04T00:49:58.555071057Z" level=info msg="TearDown network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\" successfully" Mar 4 00:49:58.555167 containerd[1718]: time="2026-03-04T00:49:58.555153577Z" level=info msg="StopPodSandbox for \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\" returns successfully" Mar 4 00:49:58.558736 systemd[1]: run-netns-cni\x2d459ae802\x2dd300\x2d3c93\x2d6cb2\x2dbf20286633ff.mount: Deactivated successfully. 
Mar 4 00:49:58.565778 containerd[1718]: time="2026-03-04T00:49:58.565491746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mxljm,Uid:51231700-2a49-4663-85fb-fd5115fc08f4,Namespace:kube-system,Attempt:1,}" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.293 [INFO][4354] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.296 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" iface="eth0" netns="/var/run/netns/cni-646302fc-8e9e-96ea-225d-caac9e4fb8e0" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.303 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" iface="eth0" netns="/var/run/netns/cni-646302fc-8e9e-96ea-225d-caac9e4fb8e0" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.303 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" iface="eth0" netns="/var/run/netns/cni-646302fc-8e9e-96ea-225d-caac9e4fb8e0" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.303 [INFO][4354] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.303 [INFO][4354] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.509 [INFO][4437] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.510 [INFO][4437] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.548 [INFO][4437] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.565 [WARNING][4437] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.565 [INFO][4437] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.566 [INFO][4437] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.571628 containerd[1718]: 2026-03-04 00:49:58.569 [INFO][4354] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:49:58.571979 containerd[1718]: time="2026-03-04T00:49:58.571742192Z" level=info msg="TearDown network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\" successfully" Mar 4 00:49:58.571979 containerd[1718]: time="2026-03-04T00:49:58.571764592Z" level=info msg="StopPodSandbox for \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\" returns successfully" Mar 4 00:49:58.575062 systemd[1]: run-netns-cni\x2d646302fc\x2d8e9e\x2d96ea\x2d225d\x2dcaac9e4fb8e0.mount: Deactivated successfully. 
Mar 4 00:49:58.578769 containerd[1718]: time="2026-03-04T00:49:58.578735398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56b5c6bcf-xvchl,Uid:e6f4eb03-2699-45e9-b386-310dd3f95ccd,Namespace:calico-system,Attempt:1,}" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.309 [INFO][4355] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.309 [INFO][4355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" iface="eth0" netns="/var/run/netns/cni-9343ec38-8e0c-b993-2b3b-5ff08dcb4612" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.310 [INFO][4355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" iface="eth0" netns="/var/run/netns/cni-9343ec38-8e0c-b993-2b3b-5ff08dcb4612" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.310 [INFO][4355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" iface="eth0" netns="/var/run/netns/cni-9343ec38-8e0c-b993-2b3b-5ff08dcb4612" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.310 [INFO][4355] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.310 [INFO][4355] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.509 [INFO][4442] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.510 [INFO][4442] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.567 [INFO][4442] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.580 [WARNING][4442] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.580 [INFO][4442] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.581 [INFO][4442] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.586889 containerd[1718]: 2026-03-04 00:49:58.583 [INFO][4355] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:49:58.587256 containerd[1718]: time="2026-03-04T00:49:58.587047725Z" level=info msg="TearDown network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\" successfully" Mar 4 00:49:58.587256 containerd[1718]: time="2026-03-04T00:49:58.587070285Z" level=info msg="StopPodSandbox for \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\" returns successfully" Mar 4 00:49:58.593341 containerd[1718]: time="2026-03-04T00:49:58.593307611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-zm8w9,Uid:0012b3a2-8c12-4b79-8f51-74e600877b09,Namespace:calico-system,Attempt:1,}" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.299 [INFO][4377] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.300 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" iface="eth0" netns="/var/run/netns/cni-2e3dc269-b8c1-7be5-1e8a-1a9c4f0c5308" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.301 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" iface="eth0" netns="/var/run/netns/cni-2e3dc269-b8c1-7be5-1e8a-1a9c4f0c5308" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.301 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" iface="eth0" netns="/var/run/netns/cni-2e3dc269-b8c1-7be5-1e8a-1a9c4f0c5308" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.302 [INFO][4377] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.302 [INFO][4377] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.519 [INFO][4435] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.519 [INFO][4435] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.581 [INFO][4435] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.592 [WARNING][4435] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.592 [INFO][4435] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.594 [INFO][4435] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.598512 containerd[1718]: 2026-03-04 00:49:58.596 [INFO][4377] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:49:58.599914 containerd[1718]: time="2026-03-04T00:49:58.599794457Z" level=info msg="TearDown network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\" successfully" Mar 4 00:49:58.600077 containerd[1718]: time="2026-03-04T00:49:58.599968377Z" level=info msg="StopPodSandbox for \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\" returns successfully" Mar 4 00:49:58.607264 containerd[1718]: time="2026-03-04T00:49:58.607185183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6znkz,Uid:42250f5b-0fe9-4ffb-8a66-17380f81c557,Namespace:kube-system,Attempt:1,}" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.370 [INFO][4412] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.371 [INFO][4412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" iface="eth0" netns="/var/run/netns/cni-679c511a-05b8-cd92-80d3-4e09ad9769a1" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.372 [INFO][4412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" iface="eth0" netns="/var/run/netns/cni-679c511a-05b8-cd92-80d3-4e09ad9769a1" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.372 [INFO][4412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" iface="eth0" netns="/var/run/netns/cni-679c511a-05b8-cd92-80d3-4e09ad9769a1" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.372 [INFO][4412] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.372 [INFO][4412] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.531 [INFO][4466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.531 [INFO][4466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.594 [INFO][4466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.604 [WARNING][4466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.605 [INFO][4466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.606 [INFO][4466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.609711 containerd[1718]: 2026-03-04 00:49:58.608 [INFO][4412] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:49:58.610194 containerd[1718]: time="2026-03-04T00:49:58.609809506Z" level=info msg="TearDown network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\" successfully" Mar 4 00:49:58.610194 containerd[1718]: time="2026-03-04T00:49:58.609827306Z" level=info msg="StopPodSandbox for \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\" returns successfully" Mar 4 00:49:58.667054 kubelet[3200]: I0304 00:49:58.666024 3200 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-ca-bundle\") pod \"cc646cc0-85f1-4307-a888-4c768b143ff9\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " Mar 4 00:49:58.667054 kubelet[3200]: I0304 00:49:58.666102 3200 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-nginx-config\") pod 
\"cc646cc0-85f1-4307-a888-4c768b143ff9\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " Mar 4 00:49:58.667054 kubelet[3200]: I0304 00:49:58.666127 3200 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6p4j\" (UniqueName: \"kubernetes.io/projected/cc646cc0-85f1-4307-a888-4c768b143ff9-kube-api-access-m6p4j\") pod \"cc646cc0-85f1-4307-a888-4c768b143ff9\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " Mar 4 00:49:58.667054 kubelet[3200]: I0304 00:49:58.666167 3200 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-backend-key-pair\") pod \"cc646cc0-85f1-4307-a888-4c768b143ff9\" (UID: \"cc646cc0-85f1-4307-a888-4c768b143ff9\") " Mar 4 00:49:58.667054 kubelet[3200]: I0304 00:49:58.666798 3200 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "cc646cc0-85f1-4307-a888-4c768b143ff9" (UID: "cc646cc0-85f1-4307-a888-4c768b143ff9"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 00:49:58.670076 kubelet[3200]: I0304 00:49:58.669617 3200 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cc646cc0-85f1-4307-a888-4c768b143ff9" (UID: "cc646cc0-85f1-4307-a888-4c768b143ff9"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 00:49:58.676656 kubelet[3200]: I0304 00:49:58.676553 3200 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cc646cc0-85f1-4307-a888-4c768b143ff9" (UID: "cc646cc0-85f1-4307-a888-4c768b143ff9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 00:49:58.676775 kubelet[3200]: I0304 00:49:58.676752 3200 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc646cc0-85f1-4307-a888-4c768b143ff9-kube-api-access-m6p4j" (OuterVolumeSpecName: "kube-api-access-m6p4j") pod "cc646cc0-85f1-4307-a888-4c768b143ff9" (UID: "cc646cc0-85f1-4307-a888-4c768b143ff9"). InnerVolumeSpecName "kube-api-access-m6p4j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 00:49:58.766934 kubelet[3200]: I0304 00:49:58.766893 3200 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-32bda88c6e\" DevicePath \"\"" Mar 4 00:49:58.766934 kubelet[3200]: I0304 00:49:58.766929 3200 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-whisker-ca-bundle\") on node \"ci-4081.3.6-n-32bda88c6e\" DevicePath \"\"" Mar 4 00:49:58.766934 kubelet[3200]: I0304 00:49:58.766939 3200 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cc646cc0-85f1-4307-a888-4c768b143ff9-nginx-config\") on node \"ci-4081.3.6-n-32bda88c6e\" DevicePath \"\"" Mar 4 00:49:58.767101 kubelet[3200]: I0304 00:49:58.766948 3200 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6p4j\" 
(UniqueName: \"kubernetes.io/projected/cc646cc0-85f1-4307-a888-4c768b143ff9-kube-api-access-m6p4j\") on node \"ci-4081.3.6-n-32bda88c6e\" DevicePath \"\"" Mar 4 00:49:58.836740 systemd-networkd[1359]: cali84f86380b8d: Link UP Mar 4 00:49:58.838840 systemd-networkd[1359]: cali84f86380b8d: Gained carrier Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.645 [ERROR][4486] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.664 [INFO][4486] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0 calico-apiserver-58c464678b- calico-system 31c90154-2140-4ee5-b886-dc4e9943430b 882 0 2026-03-04 00:49:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58c464678b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e calico-apiserver-58c464678b-tr8bm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali84f86380b8d [] [] }} ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.665 [INFO][4486] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.904700 
containerd[1718]: 2026-03-04 00:49:58.699 [INFO][4511] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" HandleID="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.710 [INFO][4511] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" HandleID="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9480), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"calico-apiserver-58c464678b-tr8bm", "timestamp":"2026-03-04 00:49:58.699134786 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000262dc0)} Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.710 [INFO][4511] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.710 [INFO][4511] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.710 [INFO][4511] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.712 [INFO][4511] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.719 [INFO][4511] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.725 [INFO][4511] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.728 [INFO][4511] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.731 [INFO][4511] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.731 [INFO][4511] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.733 [INFO][4511] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.739 [INFO][4511] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.749 [INFO][4511] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.125.193/26] block=192.168.125.192/26 handle="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.750 [INFO][4511] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.193/26] handle="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.750 [INFO][4511] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:58.904700 containerd[1718]: 2026-03-04 00:49:58.750 [INFO][4511] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.193/26] IPv6=[] ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" HandleID="k8s-pod-network.93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.905223 containerd[1718]: 2026-03-04 00:49:58.754 [INFO][4486] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"31c90154-2140-4ee5-b886-dc4e9943430b", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"calico-apiserver-58c464678b-tr8bm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali84f86380b8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:58.905223 containerd[1718]: 2026-03-04 00:49:58.755 [INFO][4486] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.193/32] ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.905223 containerd[1718]: 2026-03-04 00:49:58.755 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84f86380b8d ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.905223 containerd[1718]: 2026-03-04 00:49:58.843 [INFO][4486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" 
WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.905223 containerd[1718]: 2026-03-04 00:49:58.847 [INFO][4486] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"31c90154-2140-4ee5-b886-dc4e9943430b", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c", Pod:"calico-apiserver-58c464678b-tr8bm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali84f86380b8d", MAC:"e2:0d:a8:d5:9d:f9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:58.905223 containerd[1718]: 2026-03-04 00:49:58.879 [INFO][4486] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c" Namespace="calico-system" Pod="calico-apiserver-58c464678b-tr8bm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:49:58.963532 systemd-networkd[1359]: cali7cc7f6b204b: Link UP Mar 4 00:49:58.964613 systemd-networkd[1359]: cali7cc7f6b204b: Gained carrier Mar 4 00:49:58.972516 systemd[1]: Removed slice kubepods-besteffort-podcc646cc0_85f1_4307_a888_4c768b143ff9.slice - libcontainer container kubepods-besteffort-podcc646cc0_85f1_4307_a888_4c768b143ff9.slice. Mar 4 00:49:59.050627 containerd[1718]: time="2026-03-04T00:49:59.045778177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:59.050627 containerd[1718]: time="2026-03-04T00:49:59.045833377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:59.050627 containerd[1718]: time="2026-03-04T00:49:59.045843937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.050627 containerd[1718]: time="2026-03-04T00:49:59.045915617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.650 [ERROR][4496] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.673 [INFO][4496] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0 goldmane-cccfbd5cf- calico-system a9862f74-c05c-453b-b7c5-c17945e70f61 881 0 2026-03-04 00:49:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e goldmane-cccfbd5cf-2lgfs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7cc7f6b204b [] [] }} ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.673 [INFO][4496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.700 [INFO][4513] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" HandleID="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" 
Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.718 [INFO][4513] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" HandleID="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"goldmane-cccfbd5cf-2lgfs", "timestamp":"2026-03-04 00:49:58.700556467 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000254580)} Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.718 [INFO][4513] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.750 [INFO][4513] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.750 [INFO][4513] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.820 [INFO][4513] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.839 [INFO][4513] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.869 [INFO][4513] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.880 [INFO][4513] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.885 [INFO][4513] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.885 [INFO][4513] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.888 [INFO][4513] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804 Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.903 [INFO][4513] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.915 [INFO][4513] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.125.194/26] block=192.168.125.192/26 handle="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.915 [INFO][4513] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.194/26] handle="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.917 [INFO][4513] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:59.051575 containerd[1718]: 2026-03-04 00:49:58.918 [INFO][4513] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.194/26] IPv6=[] ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" HandleID="k8s-pod-network.7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:59.052077 containerd[1718]: 2026-03-04 00:49:58.930 [INFO][4496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"a9862f74-c05c-453b-b7c5-c17945e70f61", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"goldmane-cccfbd5cf-2lgfs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cc7f6b204b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.052077 containerd[1718]: 2026-03-04 00:49:58.930 [INFO][4496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.194/32] ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:59.052077 containerd[1718]: 2026-03-04 00:49:58.930 [INFO][4496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cc7f6b204b ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:59.052077 containerd[1718]: 2026-03-04 00:49:58.975 [INFO][4496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:59.052077 containerd[1718]: 2026-03-04 00:49:58.985 [INFO][4496] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"a9862f74-c05c-453b-b7c5-c17945e70f61", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804", Pod:"goldmane-cccfbd5cf-2lgfs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cc7f6b204b", MAC:"3a:26:d2:80:6b:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.052077 containerd[1718]: 2026-03-04 00:49:59.016 [INFO][4496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804" Namespace="calico-system" Pod="goldmane-cccfbd5cf-2lgfs" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:49:59.084134 systemd-networkd[1359]: calicecd56fff34: Link UP Mar 4 00:49:59.084317 systemd-networkd[1359]: calicecd56fff34: Gained carrier Mar 4 00:49:59.103858 systemd[1]: Started cri-containerd-93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c.scope - libcontainer container 93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c. Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.770 [ERROR][4525] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.790 [INFO][4525] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0 coredns-66bc5c9577- kube-system 51231700-2a49-4663-85fb-fd5115fc08f4 879 0 2026-03-04 00:49:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e coredns-66bc5c9577-mxljm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicecd56fff34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.790 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.878 [INFO][4573] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" HandleID="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.902 [INFO][4573] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" HandleID="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"coredns-66bc5c9577-mxljm", "timestamp":"2026-03-04 00:49:58.878368987 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001862c0)} Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.902 [INFO][4573] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.916 [INFO][4573] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.916 [INFO][4573] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.924 [INFO][4573] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.939 [INFO][4573] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.979 [INFO][4573] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.988 [INFO][4573] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.995 [INFO][4573] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:58.996 [INFO][4573] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:59.009 [INFO][4573] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365 Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:59.037 [INFO][4573] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:59.055 [INFO][4573] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.125.195/26] block=192.168.125.192/26 handle="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:59.056 [INFO][4573] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.195/26] handle="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:59.056 [INFO][4573] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:59.130038 containerd[1718]: 2026-03-04 00:49:59.056 [INFO][4573] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.195/26] IPv6=[] ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" HandleID="k8s-pod-network.11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:59.130943 containerd[1718]: 2026-03-04 00:49:59.073 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"51231700-2a49-4663-85fb-fd5115fc08f4", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"coredns-66bc5c9577-mxljm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicecd56fff34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.130943 containerd[1718]: 2026-03-04 00:49:59.074 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.195/32] ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:59.130943 containerd[1718]: 2026-03-04 00:49:59.075 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicecd56fff34 
ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:59.130943 containerd[1718]: 2026-03-04 00:49:59.087 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:59.130943 containerd[1718]: 2026-03-04 00:49:59.089 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"51231700-2a49-4663-85fb-fd5115fc08f4", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365", 
Pod:"coredns-66bc5c9577-mxljm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicecd56fff34", MAC:"36:3d:31:14:da:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.131132 containerd[1718]: 2026-03-04 00:49:59.122 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365" Namespace="kube-system" Pod="coredns-66bc5c9577-mxljm" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:49:59.139043 containerd[1718]: time="2026-03-04T00:49:59.138802060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:59.139043 containerd[1718]: time="2026-03-04T00:49:59.138857621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:59.139043 containerd[1718]: time="2026-03-04T00:49:59.138871541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.139043 containerd[1718]: time="2026-03-04T00:49:59.138964341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.141941 systemd-networkd[1359]: cali1a0ca90d0a5: Link UP Mar 4 00:49:59.142935 systemd-networkd[1359]: cali1a0ca90d0a5: Gained carrier Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:58.793 [ERROR][4535] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:58.822 [INFO][4535] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0 calico-kube-controllers-56b5c6bcf- calico-system e6f4eb03-2699-45e9-b386-310dd3f95ccd 877 0 2026-03-04 00:49:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56b5c6bcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e calico-kube-controllers-56b5c6bcf-xvchl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1a0ca90d0a5 [] [] }} ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-" Mar 4 00:49:59.171222 
containerd[1718]: 2026-03-04 00:49:58.823 [INFO][4535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:58.998 [INFO][4604] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" HandleID="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.033 [INFO][4604] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" HandleID="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f7f60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"calico-kube-controllers-56b5c6bcf-xvchl", "timestamp":"2026-03-04 00:49:58.998747215 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e51e0)} Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.033 [INFO][4604] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.058 [INFO][4604] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.058 [INFO][4604] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.061 [INFO][4604] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.073 [INFO][4604] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.083 [INFO][4604] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.093 [INFO][4604] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.098 [INFO][4604] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.098 [INFO][4604] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.101 [INFO][4604] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523 Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.109 [INFO][4604] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.125 [INFO][4604] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.125.196/26] block=192.168.125.192/26 handle="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.125 [INFO][4604] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.196/26] handle="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.125 [INFO][4604] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:59.171222 containerd[1718]: 2026-03-04 00:49:59.125 [INFO][4604] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.196/26] IPv6=[] ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" HandleID="k8s-pod-network.0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:59.172927 containerd[1718]: 2026-03-04 00:49:59.134 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0", GenerateName:"calico-kube-controllers-56b5c6bcf-", Namespace:"calico-system", SelfLink:"", UID:"e6f4eb03-2699-45e9-b386-310dd3f95ccd", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56b5c6bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"calico-kube-controllers-56b5c6bcf-xvchl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a0ca90d0a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.172927 containerd[1718]: 2026-03-04 00:49:59.135 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.196/32] ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:59.172927 containerd[1718]: 2026-03-04 00:49:59.135 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a0ca90d0a5 ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:59.172927 containerd[1718]: 2026-03-04 00:49:59.142 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:59.172927 containerd[1718]: 2026-03-04 00:49:59.142 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0", GenerateName:"calico-kube-controllers-56b5c6bcf-", Namespace:"calico-system", SelfLink:"", UID:"e6f4eb03-2699-45e9-b386-310dd3f95ccd", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56b5c6bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523", Pod:"calico-kube-controllers-56b5c6bcf-xvchl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a0ca90d0a5", MAC:"02:e8:ed:83:4a:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.172927 containerd[1718]: 2026-03-04 00:49:59.162 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523" Namespace="calico-system" Pod="calico-kube-controllers-56b5c6bcf-xvchl" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:49:59.216667 containerd[1718]: time="2026-03-04T00:49:59.215313809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:59.216667 containerd[1718]: time="2026-03-04T00:49:59.215678529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:59.216667 containerd[1718]: time="2026-03-04T00:49:59.215695769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.216667 containerd[1718]: time="2026-03-04T00:49:59.216099810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.235126 containerd[1718]: time="2026-03-04T00:49:59.234691747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:59.235126 containerd[1718]: time="2026-03-04T00:49:59.234750307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:59.235126 containerd[1718]: time="2026-03-04T00:49:59.234762187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.236310 containerd[1718]: time="2026-03-04T00:49:59.235870228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.275272 systemd-networkd[1359]: calif8699495486: Link UP Mar 4 00:49:59.276554 systemd-networkd[1359]: calif8699495486: Gained carrier Mar 4 00:49:59.321083 systemd[1]: Started cri-containerd-7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804.scope - libcontainer container 7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804. Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:58.795 [ERROR][4547] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:58.826 [INFO][4547] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0 calico-apiserver-58c464678b- calico-system 0012b3a2-8c12-4b79-8f51-74e600877b09 880 0 2026-03-04 00:49:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58c464678b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e calico-apiserver-58c464678b-zm8w9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif8699495486 [] [] }} ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" 
Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:58.827 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.048 [INFO][4598] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" HandleID="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.074 [INFO][4598] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" HandleID="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e9dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"calico-apiserver-58c464678b-zm8w9", "timestamp":"2026-03-04 00:49:59.048499539 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000c82c0)} Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.074 [INFO][4598] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.125 [INFO][4598] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.125 [INFO][4598] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.166 [INFO][4598] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.179 [INFO][4598] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.202 [INFO][4598] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.207 [INFO][4598] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.212 [INFO][4598] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.212 [INFO][4598] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.216 [INFO][4598] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52 Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.227 [INFO][4598] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" 
host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.247 [INFO][4598] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.125.197/26] block=192.168.125.192/26 handle="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.247 [INFO][4598] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.197/26] handle="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.247 [INFO][4598] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:59.324211 containerd[1718]: 2026-03-04 00:49:59.247 [INFO][4598] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.197/26] IPv6=[] ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" HandleID="k8s-pod-network.1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:59.324776 containerd[1718]: 2026-03-04 00:49:59.257 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"0012b3a2-8c12-4b79-8f51-74e600877b09", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"calico-apiserver-58c464678b-zm8w9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif8699495486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.324776 containerd[1718]: 2026-03-04 00:49:59.257 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.197/32] ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:59.324776 containerd[1718]: 2026-03-04 00:49:59.257 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8699495486 ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:59.324776 containerd[1718]: 2026-03-04 00:49:59.276 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:59.324776 containerd[1718]: 2026-03-04 00:49:59.278 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"0012b3a2-8c12-4b79-8f51-74e600877b09", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52", Pod:"calico-apiserver-58c464678b-zm8w9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif8699495486", MAC:"56:30:89:6f:87:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.324776 containerd[1718]: 2026-03-04 00:49:59.315 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52" Namespace="calico-system" Pod="calico-apiserver-58c464678b-zm8w9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:49:59.333445 systemd[1]: Started cri-containerd-0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523.scope - libcontainer container 0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523. Mar 4 00:49:59.334845 systemd[1]: Started cri-containerd-11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365.scope - libcontainer container 11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365. Mar 4 00:49:59.340649 systemd[1]: Created slice kubepods-besteffort-pod47789040_a916_49c4_96e1_04f9669ff16a.slice - libcontainer container kubepods-besteffort-pod47789040_a916_49c4_96e1_04f9669ff16a.slice. Mar 4 00:49:59.372925 containerd[1718]: time="2026-03-04T00:49:59.371884510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:59.372925 containerd[1718]: time="2026-03-04T00:49:59.371943470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:59.372925 containerd[1718]: time="2026-03-04T00:49:59.371959030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.372925 containerd[1718]: time="2026-03-04T00:49:59.372048630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.373105 kubelet[3200]: I0304 00:49:59.373046 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/47789040-a916-49c4-96e1-04f9669ff16a-nginx-config\") pod \"whisker-7596fbcd85-zk7k9\" (UID: \"47789040-a916-49c4-96e1-04f9669ff16a\") " pod="calico-system/whisker-7596fbcd85-zk7k9" Mar 4 00:49:59.373105 kubelet[3200]: I0304 00:49:59.373089 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smhv2\" (UniqueName: \"kubernetes.io/projected/47789040-a916-49c4-96e1-04f9669ff16a-kube-api-access-smhv2\") pod \"whisker-7596fbcd85-zk7k9\" (UID: \"47789040-a916-49c4-96e1-04f9669ff16a\") " pod="calico-system/whisker-7596fbcd85-zk7k9" Mar 4 00:49:59.373199 kubelet[3200]: I0304 00:49:59.373110 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47789040-a916-49c4-96e1-04f9669ff16a-whisker-backend-key-pair\") pod \"whisker-7596fbcd85-zk7k9\" (UID: \"47789040-a916-49c4-96e1-04f9669ff16a\") " pod="calico-system/whisker-7596fbcd85-zk7k9" Mar 4 00:49:59.373199 kubelet[3200]: I0304 00:49:59.373131 3200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47789040-a916-49c4-96e1-04f9669ff16a-whisker-ca-bundle\") pod \"whisker-7596fbcd85-zk7k9\" (UID: \"47789040-a916-49c4-96e1-04f9669ff16a\") " pod="calico-system/whisker-7596fbcd85-zk7k9" Mar 4 00:49:59.413526 systemd[1]: 
run-netns-cni\x2d9343ec38\x2d8e0c\x2db993\x2d2b3b\x2d5ff08dcb4612.mount: Deactivated successfully. Mar 4 00:49:59.413970 systemd[1]: run-netns-cni\x2d679c511a\x2d05b8\x2dcd92\x2d80d3\x2d4e09ad9769a1.mount: Deactivated successfully. Mar 4 00:49:59.414019 systemd[1]: run-netns-cni\x2d2e3dc269\x2db8c1\x2d7be5\x2d1e8a\x2d1a9c4f0c5308.mount: Deactivated successfully. Mar 4 00:49:59.414069 systemd[1]: var-lib-kubelet-pods-cc646cc0\x2d85f1\x2d4307\x2da888\x2d4c768b143ff9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6p4j.mount: Deactivated successfully. Mar 4 00:49:59.414123 systemd[1]: var-lib-kubelet-pods-cc646cc0\x2d85f1\x2d4307\x2da888\x2d4c768b143ff9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 00:49:59.421029 systemd-networkd[1359]: cali7b2fab2036a: Link UP Mar 4 00:49:59.423488 systemd-networkd[1359]: cali7b2fab2036a: Gained carrier Mar 4 00:49:59.457263 systemd[1]: Started cri-containerd-1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52.scope - libcontainer container 1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52. 
Mar 4 00:49:59.465103 containerd[1718]: time="2026-03-04T00:49:59.464525513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-tr8bm,Uid:31c90154-2140-4ee5-b886-dc4e9943430b,Namespace:calico-system,Attempt:1,} returns sandbox id \"93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c\"" Mar 4 00:49:59.475120 containerd[1718]: time="2026-03-04T00:49:59.471951559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:58.809 [ERROR][4557] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:58.842 [INFO][4557] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0 coredns-66bc5c9577- kube-system 42250f5b-0fe9-4ffb-8a66-17380f81c557 878 0 2026-03-04 00:49:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e coredns-66bc5c9577-6znkz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b2fab2036a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:58.842 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" 
Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.093 [INFO][4614] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" HandleID="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.133 [INFO][4614] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" HandleID="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006147a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"coredns-66bc5c9577-6znkz", "timestamp":"2026-03-04 00:49:59.09367434 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004694a0)} Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.133 [INFO][4614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.247 [INFO][4614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.248 [INFO][4614] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.272 [INFO][4614] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.293 [INFO][4614] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.325 [INFO][4614] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.329 [INFO][4614] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.335 [INFO][4614] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.335 [INFO][4614] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.344 [INFO][4614] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95 Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.363 [INFO][4614] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.380 [INFO][4614] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.125.198/26] block=192.168.125.192/26 handle="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.380 [INFO][4614] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.198/26] handle="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.380 [INFO][4614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:59.475120 containerd[1718]: 2026-03-04 00:49:59.380 [INFO][4614] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.198/26] IPv6=[] ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" HandleID="k8s-pod-network.70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:59.475693 containerd[1718]: 2026-03-04 00:49:59.386 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"42250f5b-0fe9-4ffb-8a66-17380f81c557", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"coredns-66bc5c9577-6znkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b2fab2036a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.475693 containerd[1718]: 2026-03-04 00:49:59.386 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.198/32] ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:59.475693 containerd[1718]: 2026-03-04 00:49:59.386 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b2fab2036a 
ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:59.475693 containerd[1718]: 2026-03-04 00:49:59.436 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:59.475693 containerd[1718]: 2026-03-04 00:49:59.438 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"42250f5b-0fe9-4ffb-8a66-17380f81c557", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95", 
Pod:"coredns-66bc5c9577-6znkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b2fab2036a", MAC:"66:ea:df:c2:fd:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.475861 containerd[1718]: 2026-03-04 00:49:59.463 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95" Namespace="kube-system" Pod="coredns-66bc5c9577-6znkz" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:49:59.495130 containerd[1718]: time="2026-03-04T00:49:59.495089420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mxljm,Uid:51231700-2a49-4663-85fb-fd5115fc08f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365\"" Mar 4 00:49:59.515277 containerd[1718]: time="2026-03-04T00:49:59.514949358Z" level=info msg="CreateContainer within sandbox 
\"11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 00:49:59.578463 containerd[1718]: time="2026-03-04T00:49:59.577847734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:49:59.578463 containerd[1718]: time="2026-03-04T00:49:59.577900095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:49:59.578463 containerd[1718]: time="2026-03-04T00:49:59.577917015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.578463 containerd[1718]: time="2026-03-04T00:49:59.578000615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:49:59.585576 containerd[1718]: time="2026-03-04T00:49:59.583985740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-2lgfs,Uid:a9862f74-c05c-453b-b7c5-c17945e70f61,Namespace:calico-system,Attempt:1,} returns sandbox id \"7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804\"" Mar 4 00:49:59.607446 containerd[1718]: time="2026-03-04T00:49:59.607257521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56b5c6bcf-xvchl,Uid:e6f4eb03-2699-45e9-b386-310dd3f95ccd,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523\"" Mar 4 00:49:59.639905 containerd[1718]: time="2026-03-04T00:49:59.639863710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c464678b-zm8w9,Uid:0012b3a2-8c12-4b79-8f51-74e600877b09,Namespace:calico-system,Attempt:1,} returns sandbox id \"1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52\"" Mar 4 00:49:59.641288 
containerd[1718]: time="2026-03-04T00:49:59.640859511Z" level=info msg="CreateContainer within sandbox \"11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eae22803a5d95986056f825458d04f99f040d7e09c8b71ae1bc4e2cfb4037c1b\"" Mar 4 00:49:59.645753 containerd[1718]: time="2026-03-04T00:49:59.643518073Z" level=info msg="StartContainer for \"eae22803a5d95986056f825458d04f99f040d7e09c8b71ae1bc4e2cfb4037c1b\"" Mar 4 00:49:59.654103 containerd[1718]: time="2026-03-04T00:49:59.654058443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596fbcd85-zk7k9,Uid:47789040-a916-49c4-96e1-04f9669ff16a,Namespace:calico-system,Attempt:0,}" Mar 4 00:49:59.654815 systemd[1]: Started cri-containerd-70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95.scope - libcontainer container 70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95. Mar 4 00:49:59.707107 systemd[1]: Started cri-containerd-eae22803a5d95986056f825458d04f99f040d7e09c8b71ae1bc4e2cfb4037c1b.scope - libcontainer container eae22803a5d95986056f825458d04f99f040d7e09c8b71ae1bc4e2cfb4037c1b. 
Mar 4 00:49:59.718245 containerd[1718]: time="2026-03-04T00:49:59.716960219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6znkz,Uid:42250f5b-0fe9-4ffb-8a66-17380f81c557,Namespace:kube-system,Attempt:1,} returns sandbox id \"70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95\"" Mar 4 00:49:59.729972 containerd[1718]: time="2026-03-04T00:49:59.729772151Z" level=info msg="CreateContainer within sandbox \"70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 00:49:59.764485 containerd[1718]: time="2026-03-04T00:49:59.763943981Z" level=info msg="StartContainer for \"eae22803a5d95986056f825458d04f99f040d7e09c8b71ae1bc4e2cfb4037c1b\" returns successfully" Mar 4 00:49:59.812373 containerd[1718]: time="2026-03-04T00:49:59.812325145Z" level=info msg="CreateContainer within sandbox \"70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6b5bec65629ac5761f4ce9d4883549b40e0adf1c9a6494fe9a212cb68a86263\"" Mar 4 00:49:59.815545 containerd[1718]: time="2026-03-04T00:49:59.815498268Z" level=info msg="StartContainer for \"a6b5bec65629ac5761f4ce9d4883549b40e0adf1c9a6494fe9a212cb68a86263\"" Mar 4 00:49:59.860751 systemd-networkd[1359]: cali84f86380b8d: Gained IPv6LL Mar 4 00:49:59.876108 systemd[1]: Started cri-containerd-a6b5bec65629ac5761f4ce9d4883549b40e0adf1c9a6494fe9a212cb68a86263.scope - libcontainer container a6b5bec65629ac5761f4ce9d4883549b40e0adf1c9a6494fe9a212cb68a86263. 
Mar 4 00:49:59.941598 containerd[1718]: time="2026-03-04T00:49:59.941038260Z" level=info msg="StartContainer for \"a6b5bec65629ac5761f4ce9d4883549b40e0adf1c9a6494fe9a212cb68a86263\" returns successfully" Mar 4 00:49:59.955713 systemd-networkd[1359]: cali341d9f6f3f6: Link UP Mar 4 00:49:59.956921 systemd-networkd[1359]: cali341d9f6f3f6: Gained carrier Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.775 [ERROR][5012] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.794 [INFO][5012] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0 whisker-7596fbcd85- calico-system 47789040-a916-49c4-96e1-04f9669ff16a 913 0 2026-03-04 00:49:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7596fbcd85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e whisker-7596fbcd85-zk7k9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali341d9f6f3f6 [] [] }} ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.794 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.862 [INFO][5036] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" HandleID="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.887 [INFO][5036] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" HandleID="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"whisker-7596fbcd85-zk7k9", "timestamp":"2026-03-04 00:49:59.86226575 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000252580)} Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.887 [INFO][5036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.887 [INFO][5036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.887 [INFO][5036] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.891 [INFO][5036] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.897 [INFO][5036] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.905 [INFO][5036] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.909 [INFO][5036] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.912 [INFO][5036] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.912 [INFO][5036] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.915 [INFO][5036] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8 Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.927 [INFO][5036] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.944 [INFO][5036] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.125.199/26] block=192.168.125.192/26 handle="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.944 [INFO][5036] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.199/26] handle="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.944 [INFO][5036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:49:59.975189 containerd[1718]: 2026-03-04 00:49:59.944 [INFO][5036] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.199/26] IPv6=[] ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" HandleID="k8s-pod-network.d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" Mar 4 00:49:59.976996 containerd[1718]: 2026-03-04 00:49:59.949 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0", GenerateName:"whisker-7596fbcd85-", Namespace:"calico-system", SelfLink:"", UID:"47789040-a916-49c4-96e1-04f9669ff16a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7596fbcd85", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"whisker-7596fbcd85-zk7k9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali341d9f6f3f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.976996 containerd[1718]: 2026-03-04 00:49:59.949 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.199/32] ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" Mar 4 00:49:59.976996 containerd[1718]: 2026-03-04 00:49:59.950 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali341d9f6f3f6 ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" Mar 4 00:49:59.976996 containerd[1718]: 2026-03-04 00:49:59.956 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" Mar 4 00:49:59.976996 containerd[1718]: 2026-03-04 00:49:59.956 [INFO][5012] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0", GenerateName:"whisker-7596fbcd85-", Namespace:"calico-system", SelfLink:"", UID:"47789040-a916-49c4-96e1-04f9669ff16a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7596fbcd85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8", Pod:"whisker-7596fbcd85-zk7k9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali341d9f6f3f6", MAC:"e2:77:bb:9e:db:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:49:59.976996 containerd[1718]: 2026-03-04 00:49:59.970 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8" 
Namespace="calico-system" Pod="whisker-7596fbcd85-zk7k9" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--7596fbcd85--zk7k9-eth0" Mar 4 00:50:00.017699 containerd[1718]: time="2026-03-04T00:50:00.016022528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:50:00.017699 containerd[1718]: time="2026-03-04T00:50:00.016104248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:50:00.017699 containerd[1718]: time="2026-03-04T00:50:00.016117128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:50:00.017699 containerd[1718]: time="2026-03-04T00:50:00.016197208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:50:00.034775 systemd[1]: Started cri-containerd-d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8.scope - libcontainer container d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8. 
Mar 4 00:50:00.066861 containerd[1718]: time="2026-03-04T00:50:00.066806493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596fbcd85-zk7k9,Uid:47789040-a916-49c4-96e1-04f9669ff16a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8\"" Mar 4 00:50:00.273078 kubelet[3200]: I0304 00:50:00.272573 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mxljm" podStartSLOduration=36.272542998 podStartE2EDuration="36.272542998s" podCreationTimestamp="2026-03-04 00:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:50:00.272539918 +0000 UTC m=+41.525605182" watchObservedRunningTime="2026-03-04 00:50:00.272542998 +0000 UTC m=+41.525608302" Mar 4 00:50:00.820782 systemd-networkd[1359]: calicecd56fff34: Gained IPv6LL Mar 4 00:50:00.821410 systemd-networkd[1359]: cali1a0ca90d0a5: Gained IPv6LL Mar 4 00:50:00.884782 systemd-networkd[1359]: cali7cc7f6b204b: Gained IPv6LL Mar 4 00:50:00.885062 systemd-networkd[1359]: calif8699495486: Gained IPv6LL Mar 4 00:50:00.946081 kubelet[3200]: I0304 00:50:00.945772 3200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc646cc0-85f1-4307-a888-4c768b143ff9" path="/var/lib/kubelet/pods/cc646cc0-85f1-4307-a888-4c768b143ff9/volumes" Mar 4 00:50:01.012698 systemd-networkd[1359]: cali341d9f6f3f6: Gained IPv6LL Mar 4 00:50:01.268665 systemd-networkd[1359]: cali7b2fab2036a: Gained IPv6LL Mar 4 00:50:02.791834 containerd[1718]: time="2026-03-04T00:50:02.791260815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:02.794723 containerd[1718]: time="2026-03-04T00:50:02.794698696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes 
read=45552315" Mar 4 00:50:02.798525 containerd[1718]: time="2026-03-04T00:50:02.798496338Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:02.803448 containerd[1718]: time="2026-03-04T00:50:02.803421420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:02.804508 containerd[1718]: time="2026-03-04T00:50:02.804479621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.33078234s" Mar 4 00:50:02.804578 containerd[1718]: time="2026-03-04T00:50:02.804513101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 4 00:50:02.807523 containerd[1718]: time="2026-03-04T00:50:02.806727862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 00:50:02.814565 containerd[1718]: time="2026-03-04T00:50:02.814046985Z" level=info msg="CreateContainer within sandbox \"93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 00:50:02.858985 containerd[1718]: time="2026-03-04T00:50:02.858942926Z" level=info msg="CreateContainer within sandbox \"93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"4793fecf4c613572e8dd7b00d7d6f755f6aa47fbacff727a7eb8cee881990664\"" Mar 4 00:50:02.859761 containerd[1718]: time="2026-03-04T00:50:02.859713766Z" level=info msg="StartContainer for \"4793fecf4c613572e8dd7b00d7d6f755f6aa47fbacff727a7eb8cee881990664\"" Mar 4 00:50:02.898752 systemd[1]: Started cri-containerd-4793fecf4c613572e8dd7b00d7d6f755f6aa47fbacff727a7eb8cee881990664.scope - libcontainer container 4793fecf4c613572e8dd7b00d7d6f755f6aa47fbacff727a7eb8cee881990664. Mar 4 00:50:02.956606 containerd[1718]: time="2026-03-04T00:50:02.956518251Z" level=info msg="StartContainer for \"4793fecf4c613572e8dd7b00d7d6f755f6aa47fbacff727a7eb8cee881990664\" returns successfully" Mar 4 00:50:02.968177 kubelet[3200]: I0304 00:50:02.967692 3200 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:50:03.205056 kubelet[3200]: I0304 00:50:03.205004 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6znkz" podStartSLOduration=39.204984765 podStartE2EDuration="39.204984765s" podCreationTimestamp="2026-03-04 00:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:50:00.326239126 +0000 UTC m=+41.579304390" watchObservedRunningTime="2026-03-04 00:50:03.204984765 +0000 UTC m=+44.458050069" Mar 4 00:50:04.191089 kubelet[3200]: I0304 00:50:04.191052 3200 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:50:04.715552 kubelet[3200]: I0304 00:50:04.715359 3200 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:50:04.731099 kubelet[3200]: I0304 00:50:04.731034 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-58c464678b-tr8bm" podStartSLOduration=24.392431963 podStartE2EDuration="27.731018189s" podCreationTimestamp="2026-03-04 00:49:37 +0000 UTC" firstStartedPulling="2026-03-04 
00:49:59.467205995 +0000 UTC m=+40.720271299" lastFinishedPulling="2026-03-04 00:50:02.805792221 +0000 UTC m=+44.058857525" observedRunningTime="2026-03-04 00:50:03.205485805 +0000 UTC m=+44.458551069" watchObservedRunningTime="2026-03-04 00:50:04.731018189 +0000 UTC m=+45.984083493" Mar 4 00:50:05.024715 kernel: calico-node[5326]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 00:50:05.573544 systemd-networkd[1359]: vxlan.calico: Link UP Mar 4 00:50:05.573551 systemd-networkd[1359]: vxlan.calico: Gained carrier Mar 4 00:50:06.240532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526162175.mount: Deactivated successfully. Mar 4 00:50:06.604077 containerd[1718]: time="2026-03-04T00:50:06.603941224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:06.613868 containerd[1718]: time="2026-03-04T00:50:06.613590113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 4 00:50:06.618966 containerd[1718]: time="2026-03-04T00:50:06.617882117Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:06.622921 containerd[1718]: time="2026-03-04T00:50:06.622894121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:06.623730 containerd[1718]: time="2026-03-04T00:50:06.623700202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.81693538s" Mar 4 00:50:06.623996 containerd[1718]: time="2026-03-04T00:50:06.623977522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 4 00:50:06.625828 containerd[1718]: time="2026-03-04T00:50:06.625009843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 00:50:06.633330 containerd[1718]: time="2026-03-04T00:50:06.633297410Z" level=info msg="CreateContainer within sandbox \"7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 00:50:06.668158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819382891.mount: Deactivated successfully. Mar 4 00:50:06.680586 containerd[1718]: time="2026-03-04T00:50:06.680526852Z" level=info msg="CreateContainer within sandbox \"7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"af8709e8033c747248679ccfa0755698cc65d06d166f946025788b21623a35da\"" Mar 4 00:50:06.682488 containerd[1718]: time="2026-03-04T00:50:06.681312973Z" level=info msg="StartContainer for \"af8709e8033c747248679ccfa0755698cc65d06d166f946025788b21623a35da\"" Mar 4 00:50:06.735731 systemd[1]: Started cri-containerd-af8709e8033c747248679ccfa0755698cc65d06d166f946025788b21623a35da.scope - libcontainer container af8709e8033c747248679ccfa0755698cc65d06d166f946025788b21623a35da. 
Mar 4 00:50:06.774965 containerd[1718]: time="2026-03-04T00:50:06.774857256Z" level=info msg="StartContainer for \"af8709e8033c747248679ccfa0755698cc65d06d166f946025788b21623a35da\" returns successfully" Mar 4 00:50:07.156717 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Mar 4 00:50:07.218049 kubelet[3200]: I0304 00:50:07.217972 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-2lgfs" podStartSLOduration=23.204147329 podStartE2EDuration="30.217957288s" podCreationTimestamp="2026-03-04 00:49:37 +0000 UTC" firstStartedPulling="2026-03-04 00:49:59.610868044 +0000 UTC m=+40.863933348" lastFinishedPulling="2026-03-04 00:50:06.624678003 +0000 UTC m=+47.877743307" observedRunningTime="2026-03-04 00:50:07.217765488 +0000 UTC m=+48.470830792" watchObservedRunningTime="2026-03-04 00:50:07.217957288 +0000 UTC m=+48.471022592" Mar 4 00:50:08.242365 systemd[1]: run-containerd-runc-k8s.io-af8709e8033c747248679ccfa0755698cc65d06d166f946025788b21623a35da-runc.hs2TQi.mount: Deactivated successfully. Mar 4 00:50:08.953751 containerd[1718]: time="2026-03-04T00:50:08.951688807Z" level=info msg="StopPodSandbox for \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\"" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.004 [INFO][5575] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.005 [INFO][5575] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" iface="eth0" netns="/var/run/netns/cni-c86385f5-40a2-6265-044c-8db971fca8a5" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.005 [INFO][5575] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" iface="eth0" netns="/var/run/netns/cni-c86385f5-40a2-6265-044c-8db971fca8a5" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.006 [INFO][5575] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" iface="eth0" netns="/var/run/netns/cni-c86385f5-40a2-6265-044c-8db971fca8a5" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.006 [INFO][5575] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.006 [INFO][5575] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.023 [INFO][5582] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.024 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.024 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.033 [WARNING][5582] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.033 [INFO][5582] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.035 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:09.043091 containerd[1718]: 2026-03-04 00:50:09.038 [INFO][5575] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:09.043091 containerd[1718]: time="2026-03-04T00:50:09.040620131Z" level=info msg="TearDown network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\" successfully" Mar 4 00:50:09.043091 containerd[1718]: time="2026-03-04T00:50:09.040650611Z" level=info msg="StopPodSandbox for \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\" returns successfully" Mar 4 00:50:09.045201 systemd[1]: run-netns-cni\x2dc86385f5\x2d40a2\x2d6265\x2d044c\x2d8db971fca8a5.mount: Deactivated successfully. 
Mar 4 00:50:09.481837 containerd[1718]: time="2026-03-04T00:50:09.481803509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c52z8,Uid:e7af50d6-af42-44be-9485-418ddf8e697a,Namespace:calico-system,Attempt:1,}" Mar 4 00:50:09.673955 systemd-networkd[1359]: cali7fb0acc22fe: Link UP Mar 4 00:50:09.675154 systemd-networkd[1359]: cali7fb0acc22fe: Gained carrier Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.590 [INFO][5592] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0 csi-node-driver- calico-system e7af50d6-af42-44be-9485-418ddf8e697a 989 0 2026-03-04 00:49:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-32bda88c6e csi-node-driver-c52z8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7fb0acc22fe [] [] }} ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.590 [INFO][5592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.621 [INFO][5604] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" 
HandleID="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.632 [INFO][5604] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" HandleID="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32bda88c6e", "pod":"csi-node-driver-c52z8", "timestamp":"2026-03-04 00:50:09.621364242 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32bda88c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e2dc0)} Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.632 [INFO][5604] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.632 [INFO][5604] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.632 [INFO][5604] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32bda88c6e' Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.635 [INFO][5604] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.639 [INFO][5604] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.643 [INFO][5604] ipam/ipam.go 526: Trying affinity for 192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.645 [INFO][5604] ipam/ipam.go 160: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.647 [INFO][5604] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.647 [INFO][5604] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.649 [INFO][5604] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470 Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.654 [INFO][5604] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.665 [INFO][5604] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.125.200/26] block=192.168.125.192/26 handle="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.665 [INFO][5604] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.125.200/26] handle="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" host="ci-4081.3.6-n-32bda88c6e" Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.665 [INFO][5604] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:09.700504 containerd[1718]: 2026-03-04 00:50:09.665 [INFO][5604] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.125.200/26] IPv6=[] ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" HandleID="k8s-pod-network.697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.702796 containerd[1718]: 2026-03-04 00:50:09.668 [INFO][5592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7af50d6-af42-44be-9485-418ddf8e697a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"", Pod:"csi-node-driver-c52z8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fb0acc22fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:09.702796 containerd[1718]: 2026-03-04 00:50:09.668 [INFO][5592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.200/32] ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.702796 containerd[1718]: 2026-03-04 00:50:09.668 [INFO][5592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fb0acc22fe ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.702796 containerd[1718]: 2026-03-04 00:50:09.675 [INFO][5592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.702796 
containerd[1718]: 2026-03-04 00:50:09.677 [INFO][5592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7af50d6-af42-44be-9485-418ddf8e697a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470", Pod:"csi-node-driver-c52z8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fb0acc22fe", MAC:"da:85:8e:f5:e7:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:09.702796 containerd[1718]: 
2026-03-04 00:50:09.695 [INFO][5592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470" Namespace="calico-system" Pod="csi-node-driver-c52z8" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:09.759834 containerd[1718]: time="2026-03-04T00:50:09.759399973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:50:09.759834 containerd[1718]: time="2026-03-04T00:50:09.759455693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:50:09.759834 containerd[1718]: time="2026-03-04T00:50:09.759475853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:50:09.759834 containerd[1718]: time="2026-03-04T00:50:09.759664013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:50:09.802745 systemd[1]: Started cri-containerd-697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470.scope - libcontainer container 697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470. 
Mar 4 00:50:09.853260 containerd[1718]: time="2026-03-04T00:50:09.853222502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c52z8,Uid:e7af50d6-af42-44be-9485-418ddf8e697a,Namespace:calico-system,Attempt:1,} returns sandbox id \"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470\"" Mar 4 00:50:10.295220 containerd[1718]: time="2026-03-04T00:50:10.295169881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:10.298483 containerd[1718]: time="2026-03-04T00:50:10.298422804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 4 00:50:10.302572 containerd[1718]: time="2026-03-04T00:50:10.302246447Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:10.307454 containerd[1718]: time="2026-03-04T00:50:10.307416372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:10.308220 containerd[1718]: time="2026-03-04T00:50:10.308192333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.682256889s" Mar 4 00:50:10.308319 containerd[1718]: time="2026-03-04T00:50:10.308301933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 4 00:50:10.310078 containerd[1718]: time="2026-03-04T00:50:10.310048855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 00:50:10.343216 containerd[1718]: time="2026-03-04T00:50:10.343040886Z" level=info msg="CreateContainer within sandbox \"0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 00:50:10.602727 containerd[1718]: time="2026-03-04T00:50:10.602602012Z" level=info msg="CreateContainer within sandbox \"0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"be7b97c335f25066e0e9d3e110f8bbbc838ad395074f3dbb6186ad526507011f\"" Mar 4 00:50:10.603722 containerd[1718]: time="2026-03-04T00:50:10.603691933Z" level=info msg="StartContainer for \"be7b97c335f25066e0e9d3e110f8bbbc838ad395074f3dbb6186ad526507011f\"" Mar 4 00:50:10.640717 systemd[1]: Started cri-containerd-be7b97c335f25066e0e9d3e110f8bbbc838ad395074f3dbb6186ad526507011f.scope - libcontainer container be7b97c335f25066e0e9d3e110f8bbbc838ad395074f3dbb6186ad526507011f. 
Mar 4 00:50:10.654479 containerd[1718]: time="2026-03-04T00:50:10.654438101Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:10.661105 containerd[1718]: time="2026-03-04T00:50:10.661068387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 00:50:10.664140 containerd[1718]: time="2026-03-04T00:50:10.664015910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 353.843375ms" Mar 4 00:50:10.664140 containerd[1718]: time="2026-03-04T00:50:10.664050950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 4 00:50:10.665212 containerd[1718]: time="2026-03-04T00:50:10.665056991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 00:50:10.673998 containerd[1718]: time="2026-03-04T00:50:10.673876760Z" level=info msg="CreateContainer within sandbox \"1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 00:50:10.684461 containerd[1718]: time="2026-03-04T00:50:10.684419930Z" level=info msg="StartContainer for \"be7b97c335f25066e0e9d3e110f8bbbc838ad395074f3dbb6186ad526507011f\" returns successfully" Mar 4 00:50:10.727981 containerd[1718]: time="2026-03-04T00:50:10.727868131Z" level=info msg="CreateContainer within sandbox \"1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container 
id \"362aa63dddfb34dde590b6eb9d90948e99a66984ddd0a62fb743057de2f6db38\"" Mar 4 00:50:10.729754 containerd[1718]: time="2026-03-04T00:50:10.728695972Z" level=info msg="StartContainer for \"362aa63dddfb34dde590b6eb9d90948e99a66984ddd0a62fb743057de2f6db38\"" Mar 4 00:50:10.740824 systemd-networkd[1359]: cali7fb0acc22fe: Gained IPv6LL Mar 4 00:50:10.754732 systemd[1]: Started cri-containerd-362aa63dddfb34dde590b6eb9d90948e99a66984ddd0a62fb743057de2f6db38.scope - libcontainer container 362aa63dddfb34dde590b6eb9d90948e99a66984ddd0a62fb743057de2f6db38. Mar 4 00:50:10.804645 containerd[1718]: time="2026-03-04T00:50:10.804547683Z" level=info msg="StartContainer for \"362aa63dddfb34dde590b6eb9d90948e99a66984ddd0a62fb743057de2f6db38\" returns successfully" Mar 4 00:50:11.259538 kubelet[3200]: I0304 00:50:11.259478 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56b5c6bcf-xvchl" podStartSLOduration=21.567230191 podStartE2EDuration="32.259461755s" podCreationTimestamp="2026-03-04 00:49:39 +0000 UTC" firstStartedPulling="2026-03-04 00:49:59.61692885 +0000 UTC m=+40.869994114" lastFinishedPulling="2026-03-04 00:50:10.309160334 +0000 UTC m=+51.562225678" observedRunningTime="2026-03-04 00:50:11.259049634 +0000 UTC m=+52.512114938" watchObservedRunningTime="2026-03-04 00:50:11.259461755 +0000 UTC m=+52.512527099" Mar 4 00:50:11.259969 kubelet[3200]: I0304 00:50:11.259576 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-58c464678b-zm8w9" podStartSLOduration=23.241605841 podStartE2EDuration="34.259570195s" podCreationTimestamp="2026-03-04 00:49:37 +0000 UTC" firstStartedPulling="2026-03-04 00:49:59.646971957 +0000 UTC m=+40.900037261" lastFinishedPulling="2026-03-04 00:50:10.664936311 +0000 UTC m=+51.918001615" observedRunningTime="2026-03-04 00:50:11.238349535 +0000 UTC m=+52.491414839" watchObservedRunningTime="2026-03-04 00:50:11.259570195 +0000 UTC 
m=+52.512635499" Mar 4 00:50:12.217704 kubelet[3200]: I0304 00:50:12.217245 3200 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:50:12.526646 containerd[1718]: time="2026-03-04T00:50:12.526323796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:12.532511 containerd[1718]: time="2026-03-04T00:50:12.532472322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 4 00:50:12.539598 containerd[1718]: time="2026-03-04T00:50:12.538671208Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:12.543964 containerd[1718]: time="2026-03-04T00:50:12.543919933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:12.553828 containerd[1718]: time="2026-03-04T00:50:12.553761862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.888670951s" Mar 4 00:50:12.553828 containerd[1718]: time="2026-03-04T00:50:12.553802702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 4 00:50:12.556308 containerd[1718]: time="2026-03-04T00:50:12.556276024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 00:50:12.563482 containerd[1718]: 
time="2026-03-04T00:50:12.563165831Z" level=info msg="CreateContainer within sandbox \"d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 00:50:12.606967 containerd[1718]: time="2026-03-04T00:50:12.606922552Z" level=info msg="CreateContainer within sandbox \"d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6e2188eed1df1c0dd96ca8d826cf749a37cc7498dc3278949bfc054862803d7a\"" Mar 4 00:50:12.608831 containerd[1718]: time="2026-03-04T00:50:12.608798834Z" level=info msg="StartContainer for \"6e2188eed1df1c0dd96ca8d826cf749a37cc7498dc3278949bfc054862803d7a\"" Mar 4 00:50:12.655368 systemd[1]: Started cri-containerd-6e2188eed1df1c0dd96ca8d826cf749a37cc7498dc3278949bfc054862803d7a.scope - libcontainer container 6e2188eed1df1c0dd96ca8d826cf749a37cc7498dc3278949bfc054862803d7a. Mar 4 00:50:12.715918 containerd[1718]: time="2026-03-04T00:50:12.715863136Z" level=info msg="StartContainer for \"6e2188eed1df1c0dd96ca8d826cf749a37cc7498dc3278949bfc054862803d7a\" returns successfully" Mar 4 00:50:14.113439 containerd[1718]: time="2026-03-04T00:50:14.113153780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:14.117269 containerd[1718]: time="2026-03-04T00:50:14.117207944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 4 00:50:14.121629 containerd[1718]: time="2026-03-04T00:50:14.121283228Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:14.126684 containerd[1718]: time="2026-03-04T00:50:14.126649313Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:14.127347 containerd[1718]: time="2026-03-04T00:50:14.127315994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.57100497s" Mar 4 00:50:14.127405 containerd[1718]: time="2026-03-04T00:50:14.127347714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 4 00:50:14.128846 containerd[1718]: time="2026-03-04T00:50:14.128729675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 00:50:14.136232 containerd[1718]: time="2026-03-04T00:50:14.135916722Z" level=info msg="CreateContainer within sandbox \"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 00:50:14.175178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627778520.mount: Deactivated successfully. 
Mar 4 00:50:14.196848 containerd[1718]: time="2026-03-04T00:50:14.196802580Z" level=info msg="CreateContainer within sandbox \"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0485abecd1af71fb9723395e8cfe35b2884c516b9dcfdeb9b0a8eecfabf467c8\"" Mar 4 00:50:14.197588 containerd[1718]: time="2026-03-04T00:50:14.197471740Z" level=info msg="StartContainer for \"0485abecd1af71fb9723395e8cfe35b2884c516b9dcfdeb9b0a8eecfabf467c8\"" Mar 4 00:50:14.228289 systemd[1]: run-containerd-runc-k8s.io-0485abecd1af71fb9723395e8cfe35b2884c516b9dcfdeb9b0a8eecfabf467c8-runc.QOIGaO.mount: Deactivated successfully. Mar 4 00:50:14.236083 systemd[1]: Started cri-containerd-0485abecd1af71fb9723395e8cfe35b2884c516b9dcfdeb9b0a8eecfabf467c8.scope - libcontainer container 0485abecd1af71fb9723395e8cfe35b2884c516b9dcfdeb9b0a8eecfabf467c8. Mar 4 00:50:14.269668 containerd[1718]: time="2026-03-04T00:50:14.269522689Z" level=info msg="StartContainer for \"0485abecd1af71fb9723395e8cfe35b2884c516b9dcfdeb9b0a8eecfabf467c8\" returns successfully" Mar 4 00:50:16.455080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843580786.mount: Deactivated successfully. 
Mar 4 00:50:16.513236 containerd[1718]: time="2026-03-04T00:50:16.513193056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:16.516830 containerd[1718]: time="2026-03-04T00:50:16.516796819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 4 00:50:16.520784 containerd[1718]: time="2026-03-04T00:50:16.520736623Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:16.527264 containerd[1718]: time="2026-03-04T00:50:16.527048749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.398286594s" Mar 4 00:50:16.527264 containerd[1718]: time="2026-03-04T00:50:16.527088909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 4 00:50:16.530479 containerd[1718]: time="2026-03-04T00:50:16.529420431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 00:50:16.533812 containerd[1718]: time="2026-03-04T00:50:16.533760355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:16.541528 containerd[1718]: time="2026-03-04T00:50:16.541391643Z" level=info msg="CreateContainer within sandbox 
\"d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 00:50:16.585455 containerd[1718]: time="2026-03-04T00:50:16.585046884Z" level=info msg="CreateContainer within sandbox \"d4316eb948cc23c7e6df0328253b5659b418229fd302cd2216b369943a6925a8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"17b20d242a32303671247215bf8d25708440baaaece3fa1d19158c19484ff062\"" Mar 4 00:50:16.587780 containerd[1718]: time="2026-03-04T00:50:16.585787245Z" level=info msg="StartContainer for \"17b20d242a32303671247215bf8d25708440baaaece3fa1d19158c19484ff062\"" Mar 4 00:50:16.623756 systemd[1]: Started cri-containerd-17b20d242a32303671247215bf8d25708440baaaece3fa1d19158c19484ff062.scope - libcontainer container 17b20d242a32303671247215bf8d25708440baaaece3fa1d19158c19484ff062. Mar 4 00:50:16.659611 containerd[1718]: time="2026-03-04T00:50:16.659533116Z" level=info msg="StartContainer for \"17b20d242a32303671247215bf8d25708440baaaece3fa1d19158c19484ff062\" returns successfully" Mar 4 00:50:18.185029 containerd[1718]: time="2026-03-04T00:50:18.184985771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:18.188716 containerd[1718]: time="2026-03-04T00:50:18.188688495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 4 00:50:18.192587 containerd[1718]: time="2026-03-04T00:50:18.192535018Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:18.207143 containerd[1718]: time="2026-03-04T00:50:18.206250512Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:50:18.207143 containerd[1718]: time="2026-03-04T00:50:18.207021353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.677566162s" Mar 4 00:50:18.207143 containerd[1718]: time="2026-03-04T00:50:18.207054793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 4 00:50:18.219089 containerd[1718]: time="2026-03-04T00:50:18.219053364Z" level=info msg="CreateContainer within sandbox \"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 00:50:18.273783 containerd[1718]: time="2026-03-04T00:50:18.273740818Z" level=info msg="CreateContainer within sandbox \"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dd6e22f36b0512f7cc4775c90e2c2cff1a828163cd8918a4b11e865fc83d79e2\"" Mar 4 00:50:18.274787 containerd[1718]: time="2026-03-04T00:50:18.274696539Z" level=info msg="StartContainer for \"dd6e22f36b0512f7cc4775c90e2c2cff1a828163cd8918a4b11e865fc83d79e2\"" Mar 4 00:50:18.312746 systemd[1]: Started cri-containerd-dd6e22f36b0512f7cc4775c90e2c2cff1a828163cd8918a4b11e865fc83d79e2.scope - libcontainer container dd6e22f36b0512f7cc4775c90e2c2cff1a828163cd8918a4b11e865fc83d79e2. 
Mar 4 00:50:18.347848 containerd[1718]: time="2026-03-04T00:50:18.347802131Z" level=info msg="StartContainer for \"dd6e22f36b0512f7cc4775c90e2c2cff1a828163cd8918a4b11e865fc83d79e2\" returns successfully" Mar 4 00:50:18.465062 kubelet[3200]: I0304 00:50:18.464678 3200 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:50:18.494735 kubelet[3200]: I0304 00:50:18.494673 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7596fbcd85-zk7k9" podStartSLOduration=3.033623657 podStartE2EDuration="19.494533394s" podCreationTimestamp="2026-03-04 00:49:59 +0000 UTC" firstStartedPulling="2026-03-04 00:50:00.068040094 +0000 UTC m=+41.321105398" lastFinishedPulling="2026-03-04 00:50:16.528949791 +0000 UTC m=+57.782015135" observedRunningTime="2026-03-04 00:50:17.262155707 +0000 UTC m=+58.515221011" watchObservedRunningTime="2026-03-04 00:50:18.494533394 +0000 UTC m=+59.747598698" Mar 4 00:50:18.958941 containerd[1718]: time="2026-03-04T00:50:18.958643809Z" level=info msg="StopPodSandbox for \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\"" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:18.994 [WARNING][6006] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0", GenerateName:"calico-kube-controllers-56b5c6bcf-", Namespace:"calico-system", SelfLink:"", UID:"e6f4eb03-2699-45e9-b386-310dd3f95ccd", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56b5c6bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523", Pod:"calico-kube-controllers-56b5c6bcf-xvchl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a0ca90d0a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:18.994 [INFO][6006] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:18.994 [INFO][6006] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" iface="eth0" netns="" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:18.994 [INFO][6006] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:18.994 [INFO][6006] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:19.019 [INFO][6013] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:19.019 [INFO][6013] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:19.019 [INFO][6013] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:19.028 [WARNING][6013] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:19.028 [INFO][6013] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:19.030 [INFO][6013] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.032903 containerd[1718]: 2026-03-04 00:50:19.031 [INFO][6006] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.033347 containerd[1718]: time="2026-03-04T00:50:19.032943162Z" level=info msg="TearDown network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\" successfully" Mar 4 00:50:19.033347 containerd[1718]: time="2026-03-04T00:50:19.032971402Z" level=info msg="StopPodSandbox for \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\" returns successfully" Mar 4 00:50:19.033905 containerd[1718]: time="2026-03-04T00:50:19.033660923Z" level=info msg="RemovePodSandbox for \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\"" Mar 4 00:50:19.033905 containerd[1718]: time="2026-03-04T00:50:19.033693483Z" level=info msg="Forcibly stopping sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\"" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.070 [WARNING][6027] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0", GenerateName:"calico-kube-controllers-56b5c6bcf-", Namespace:"calico-system", SelfLink:"", UID:"e6f4eb03-2699-45e9-b386-310dd3f95ccd", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56b5c6bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"0b35ba6ae6d77b1baabc745d3c088b6ac387ae3c38d80008935355f6c49d7523", Pod:"calico-kube-controllers-56b5c6bcf-xvchl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a0ca90d0a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.070 [INFO][6027] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.070 [INFO][6027] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" iface="eth0" netns="" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.070 [INFO][6027] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.070 [INFO][6027] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.087 [INFO][6034] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.088 [INFO][6034] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.088 [INFO][6034] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.096 [WARNING][6034] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.096 [INFO][6034] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" HandleID="k8s-pod-network.bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--kube--controllers--56b5c6bcf--xvchl-eth0" Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.097 [INFO][6034] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.100930 containerd[1718]: 2026-03-04 00:50:19.099 [INFO][6027] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15" Mar 4 00:50:19.101339 containerd[1718]: time="2026-03-04T00:50:19.100983229Z" level=info msg="TearDown network for sandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\" successfully" Mar 4 00:50:19.114556 containerd[1718]: time="2026-03-04T00:50:19.114510282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:19.114699 containerd[1718]: time="2026-03-04T00:50:19.114598762Z" level=info msg="RemovePodSandbox \"bcb0a6367308d5587ace7d8b94c6867c9a1ec8ffc37f379c803d9200740afb15\" returns successfully" Mar 4 00:50:19.115438 containerd[1718]: time="2026-03-04T00:50:19.115217363Z" level=info msg="StopPodSandbox for \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\"" Mar 4 00:50:19.174286 kubelet[3200]: I0304 00:50:19.174203 3200 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 00:50:19.179578 kubelet[3200]: I0304 00:50:19.179022 3200 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.148 [WARNING][6048] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"a9862f74-c05c-453b-b7c5-c17945e70f61", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804", Pod:"goldmane-cccfbd5cf-2lgfs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cc7f6b204b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.150 [INFO][6048] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.150 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" iface="eth0" netns="" Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.150 [INFO][6048] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.150 [INFO][6048] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.170 [INFO][6055] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.170 [INFO][6055] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.170 [INFO][6055] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.185 [WARNING][6055] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.185 [INFO][6055] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.187 [INFO][6055] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.191940 containerd[1718]: 2026-03-04 00:50:19.189 [INFO][6048] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.192845 containerd[1718]: time="2026-03-04T00:50:19.192214158Z" level=info msg="TearDown network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\" successfully" Mar 4 00:50:19.192845 containerd[1718]: time="2026-03-04T00:50:19.192238798Z" level=info msg="StopPodSandbox for \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\" returns successfully" Mar 4 00:50:19.193137 containerd[1718]: time="2026-03-04T00:50:19.193107919Z" level=info msg="RemovePodSandbox for \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\"" Mar 4 00:50:19.193188 containerd[1718]: time="2026-03-04T00:50:19.193157599Z" level=info msg="Forcibly stopping sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\"" Mar 4 00:50:19.276544 kubelet[3200]: I0304 00:50:19.276410 3200 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c52z8" podStartSLOduration=31.923508071 
podStartE2EDuration="40.276393201s" podCreationTimestamp="2026-03-04 00:49:39 +0000 UTC" firstStartedPulling="2026-03-04 00:50:09.855299304 +0000 UTC m=+51.108364608" lastFinishedPulling="2026-03-04 00:50:18.208184434 +0000 UTC m=+59.461249738" observedRunningTime="2026-03-04 00:50:19.274808959 +0000 UTC m=+60.527874263" watchObservedRunningTime="2026-03-04 00:50:19.276393201 +0000 UTC m=+60.529458505" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.232 [WARNING][6069] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"a9862f74-c05c-453b-b7c5-c17945e70f61", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"7152c4a028723b284a9c6626adc49f59b64f2dc1e686ed96f53868e4288ce804", Pod:"goldmane-cccfbd5cf-2lgfs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali7cc7f6b204b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.232 [INFO][6069] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.232 [INFO][6069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" iface="eth0" netns="" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.232 [INFO][6069] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.232 [INFO][6069] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.258 [INFO][6076] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.258 [INFO][6076] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.258 [INFO][6076] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.274 [WARNING][6076] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.274 [INFO][6076] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" HandleID="k8s-pod-network.d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Workload="ci--4081.3.6--n--32bda88c6e-k8s-goldmane--cccfbd5cf--2lgfs-eth0" Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.278 [INFO][6076] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.282137 containerd[1718]: 2026-03-04 00:50:19.279 [INFO][6069] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8" Mar 4 00:50:19.282531 containerd[1718]: time="2026-03-04T00:50:19.282179846Z" level=info msg="TearDown network for sandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\" successfully" Mar 4 00:50:19.291695 containerd[1718]: time="2026-03-04T00:50:19.291637455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:19.291851 containerd[1718]: time="2026-03-04T00:50:19.291714216Z" level=info msg="RemovePodSandbox \"d291a25a45d28a761ee97a64e3e045f7bd72d39373204e0e266ea777a9baf5e8\" returns successfully" Mar 4 00:50:19.292810 containerd[1718]: time="2026-03-04T00:50:19.292533856Z" level=info msg="StopPodSandbox for \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\"" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.337 [WARNING][6091] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7af50d6-af42-44be-9485-418ddf8e697a", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470", Pod:"csi-node-driver-c52z8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fb0acc22fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.337 [INFO][6091] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.337 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" iface="eth0" netns="" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.337 [INFO][6091] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.337 [INFO][6091] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.355 [INFO][6098] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.355 [INFO][6098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.355 [INFO][6098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.363 [WARNING][6098] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.363 [INFO][6098] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.365 [INFO][6098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.368231 containerd[1718]: 2026-03-04 00:50:19.366 [INFO][6091] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.369165 containerd[1718]: time="2026-03-04T00:50:19.368644811Z" level=info msg="TearDown network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\" successfully" Mar 4 00:50:19.369165 containerd[1718]: time="2026-03-04T00:50:19.368679611Z" level=info msg="StopPodSandbox for \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\" returns successfully" Mar 4 00:50:19.369165 containerd[1718]: time="2026-03-04T00:50:19.369122811Z" level=info msg="RemovePodSandbox for \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\"" Mar 4 00:50:19.369165 containerd[1718]: time="2026-03-04T00:50:19.369152171Z" level=info msg="Forcibly stopping sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\"" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.400 [WARNING][6113] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7af50d6-af42-44be-9485-418ddf8e697a", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"697c225e596641a85eb3520700cbe8067581e7bba17a7e071af4044eb9c56470", Pod:"csi-node-driver-c52z8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fb0acc22fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.401 [INFO][6113] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.401 [INFO][6113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" iface="eth0" netns="" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.401 [INFO][6113] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.402 [INFO][6113] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.420 [INFO][6120] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.420 [INFO][6120] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.421 [INFO][6120] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.429 [WARNING][6120] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.429 [INFO][6120] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" HandleID="k8s-pod-network.928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Workload="ci--4081.3.6--n--32bda88c6e-k8s-csi--node--driver--c52z8-eth0" Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.430 [INFO][6120] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.434591 containerd[1718]: 2026-03-04 00:50:19.432 [INFO][6113] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3" Mar 4 00:50:19.434591 containerd[1718]: time="2026-03-04T00:50:19.434126835Z" level=info msg="TearDown network for sandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\" successfully" Mar 4 00:50:19.443636 containerd[1718]: time="2026-03-04T00:50:19.443597684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:19.443898 containerd[1718]: time="2026-03-04T00:50:19.443809845Z" level=info msg="RemovePodSandbox \"928aa08c2fada21206d31d4db9ee14e9999830a2b71ca2c9ffe8e9719516f9b3\" returns successfully" Mar 4 00:50:19.444311 containerd[1718]: time="2026-03-04T00:50:19.444289485Z" level=info msg="StopPodSandbox for \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\"" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.475 [WARNING][6134] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"31c90154-2140-4ee5-b886-dc4e9943430b", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c", Pod:"calico-apiserver-58c464678b-tr8bm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali84f86380b8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.475 [INFO][6134] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.475 [INFO][6134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" iface="eth0" netns="" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.475 [INFO][6134] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.475 [INFO][6134] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.492 [INFO][6141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.492 [INFO][6141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.492 [INFO][6141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.501 [WARNING][6141] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.501 [INFO][6141] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.502 [INFO][6141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.505464 containerd[1718]: 2026-03-04 00:50:19.503 [INFO][6134] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.505950 containerd[1718]: time="2026-03-04T00:50:19.505635865Z" level=info msg="TearDown network for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\" successfully" Mar 4 00:50:19.505950 containerd[1718]: time="2026-03-04T00:50:19.505666145Z" level=info msg="StopPodSandbox for \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\" returns successfully" Mar 4 00:50:19.506274 containerd[1718]: time="2026-03-04T00:50:19.506218066Z" level=info msg="RemovePodSandbox for \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\"" Mar 4 00:50:19.506332 containerd[1718]: time="2026-03-04T00:50:19.506280386Z" level=info msg="Forcibly stopping sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\"" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.538 [WARNING][6155] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"31c90154-2140-4ee5-b886-dc4e9943430b", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"93c8e76ab14b315b7c2bfec0bf684073cd78ac284af5fc6b33aa302be3cd705c", Pod:"calico-apiserver-58c464678b-tr8bm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali84f86380b8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.539 [INFO][6155] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.539 [INFO][6155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" iface="eth0" netns="" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.539 [INFO][6155] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.539 [INFO][6155] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.559 [INFO][6163] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.560 [INFO][6163] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.560 [INFO][6163] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.569 [WARNING][6163] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.569 [INFO][6163] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" HandleID="k8s-pod-network.ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--tr8bm-eth0" Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.570 [INFO][6163] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.574196 containerd[1718]: 2026-03-04 00:50:19.572 [INFO][6155] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389" Mar 4 00:50:19.574196 containerd[1718]: time="2026-03-04T00:50:19.573780932Z" level=info msg="TearDown network for sandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\" successfully" Mar 4 00:50:19.585287 containerd[1718]: time="2026-03-04T00:50:19.585121423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:19.585287 containerd[1718]: time="2026-03-04T00:50:19.585191983Z" level=info msg="RemovePodSandbox \"ef227c499c91c90ac109d827d430a7725091109473966a762ab0fd1553199389\" returns successfully" Mar 4 00:50:19.585852 containerd[1718]: time="2026-03-04T00:50:19.585830184Z" level=info msg="StopPodSandbox for \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\"" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.618 [WARNING][6177] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.618 [INFO][6177] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.618 [INFO][6177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" iface="eth0" netns="" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.618 [INFO][6177] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.618 [INFO][6177] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.638 [INFO][6184] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.638 [INFO][6184] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.638 [INFO][6184] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.646 [WARNING][6184] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.646 [INFO][6184] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.647 [INFO][6184] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.650438 containerd[1718]: 2026-03-04 00:50:19.649 [INFO][6177] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.650809 containerd[1718]: time="2026-03-04T00:50:19.650486327Z" level=info msg="TearDown network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\" successfully" Mar 4 00:50:19.650809 containerd[1718]: time="2026-03-04T00:50:19.650510487Z" level=info msg="StopPodSandbox for \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\" returns successfully" Mar 4 00:50:19.651009 containerd[1718]: time="2026-03-04T00:50:19.650981128Z" level=info msg="RemovePodSandbox for \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\"" Mar 4 00:50:19.651043 containerd[1718]: time="2026-03-04T00:50:19.651030568Z" level=info msg="Forcibly stopping sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\"" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.682 [WARNING][6198] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" WorkloadEndpoint="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.682 [INFO][6198] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.682 [INFO][6198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" iface="eth0" netns="" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.682 [INFO][6198] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.682 [INFO][6198] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.701 [INFO][6205] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.701 [INFO][6205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.701 [INFO][6205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.710 [WARNING][6205] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.710 [INFO][6205] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" HandleID="k8s-pod-network.6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Workload="ci--4081.3.6--n--32bda88c6e-k8s-whisker--6467f5f748--24g99-eth0" Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.711 [INFO][6205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.716269 containerd[1718]: 2026-03-04 00:50:19.714 [INFO][6198] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701" Mar 4 00:50:19.716632 containerd[1718]: time="2026-03-04T00:50:19.716318272Z" level=info msg="TearDown network for sandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\" successfully" Mar 4 00:50:19.727881 containerd[1718]: time="2026-03-04T00:50:19.727837563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:19.727982 containerd[1718]: time="2026-03-04T00:50:19.727920803Z" level=info msg="RemovePodSandbox \"6378caf856dbc4e39a62f0d1404228bdf2d86ccff415dfffd6aa39dcaf763701\" returns successfully" Mar 4 00:50:19.728608 containerd[1718]: time="2026-03-04T00:50:19.728584404Z" level=info msg="StopPodSandbox for \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\"" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.760 [WARNING][6219] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"51231700-2a49-4663-85fb-fd5115fc08f4", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365", Pod:"coredns-66bc5c9577-mxljm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicecd56fff34", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.760 [INFO][6219] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.760 [INFO][6219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" iface="eth0" netns="" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.760 [INFO][6219] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.760 [INFO][6219] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.787 [INFO][6226] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.787 [INFO][6226] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.787 [INFO][6226] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.797 [WARNING][6226] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.797 [INFO][6226] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.799 [INFO][6226] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.804115 containerd[1718]: 2026-03-04 00:50:19.802 [INFO][6219] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.805272 containerd[1718]: time="2026-03-04T00:50:19.804500318Z" level=info msg="TearDown network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\" successfully" Mar 4 00:50:19.805272 containerd[1718]: time="2026-03-04T00:50:19.804527758Z" level=info msg="StopPodSandbox for \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\" returns successfully" Mar 4 00:50:19.805856 containerd[1718]: time="2026-03-04T00:50:19.805573279Z" level=info msg="RemovePodSandbox for \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\"" Mar 4 00:50:19.805856 containerd[1718]: time="2026-03-04T00:50:19.805607519Z" level=info msg="Forcibly stopping sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\"" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.843 [WARNING][6247] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"51231700-2a49-4663-85fb-fd5115fc08f4", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"11ddcc14fae6e04923e0311474efb9cba34b5df56b74585a681b5d0a180a3365", Pod:"coredns-66bc5c9577-mxljm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicecd56fff34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.844 [INFO][6247] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.844 [INFO][6247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" iface="eth0" netns="" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.844 [INFO][6247] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.844 [INFO][6247] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.861 [INFO][6254] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.861 [INFO][6254] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.861 [INFO][6254] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.870 [WARNING][6254] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.870 [INFO][6254] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" HandleID="k8s-pod-network.2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--mxljm-eth0" Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.871 [INFO][6254] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.874440 containerd[1718]: 2026-03-04 00:50:19.872 [INFO][6247] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6" Mar 4 00:50:19.875941 containerd[1718]: time="2026-03-04T00:50:19.874748067Z" level=info msg="TearDown network for sandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\" successfully" Mar 4 00:50:19.885089 containerd[1718]: time="2026-03-04T00:50:19.885039517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:19.885544 containerd[1718]: time="2026-03-04T00:50:19.885250677Z" level=info msg="RemovePodSandbox \"2713271c69fbedf42c4640b39aecb5e0a20c0dbd5dbf1391fa145b0e541fb4c6\" returns successfully" Mar 4 00:50:19.885712 containerd[1718]: time="2026-03-04T00:50:19.885687238Z" level=info msg="StopPodSandbox for \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\"" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.918 [WARNING][6268] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"0012b3a2-8c12-4b79-8f51-74e600877b09", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52", Pod:"calico-apiserver-58c464678b-zm8w9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif8699495486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.918 [INFO][6268] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.918 [INFO][6268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" iface="eth0" netns="" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.918 [INFO][6268] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.918 [INFO][6268] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.941 [INFO][6275] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.941 [INFO][6275] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.941 [INFO][6275] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.952 [WARNING][6275] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.952 [INFO][6275] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.954 [INFO][6275] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:19.957480 containerd[1718]: 2026-03-04 00:50:19.956 [INFO][6268] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:19.957480 containerd[1718]: time="2026-03-04T00:50:19.957428708Z" level=info msg="TearDown network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\" successfully" Mar 4 00:50:19.957480 containerd[1718]: time="2026-03-04T00:50:19.957452948Z" level=info msg="StopPodSandbox for \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\" returns successfully" Mar 4 00:50:19.957948 containerd[1718]: time="2026-03-04T00:50:19.957863028Z" level=info msg="RemovePodSandbox for \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\"" Mar 4 00:50:19.957948 containerd[1718]: time="2026-03-04T00:50:19.957887188Z" level=info msg="Forcibly stopping sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\"" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:19.989 [WARNING][6290] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0", GenerateName:"calico-apiserver-58c464678b-", Namespace:"calico-system", SelfLink:"", UID:"0012b3a2-8c12-4b79-8f51-74e600877b09", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c464678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"1cb6a775decaea5754ba214a7170c01ea3130881fb768901ea03a13b9d0fea52", Pod:"calico-apiserver-58c464678b-zm8w9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif8699495486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:19.989 [INFO][6290] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:19.989 [INFO][6290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" iface="eth0" netns="" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:19.989 [INFO][6290] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:19.989 [INFO][6290] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:20.008 [INFO][6297] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:20.008 [INFO][6297] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:20.008 [INFO][6297] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:20.017 [WARNING][6297] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:20.017 [INFO][6297] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" HandleID="k8s-pod-network.293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Workload="ci--4081.3.6--n--32bda88c6e-k8s-calico--apiserver--58c464678b--zm8w9-eth0" Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:20.019 [INFO][6297] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:20.022012 containerd[1718]: 2026-03-04 00:50:20.020 [INFO][6290] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19" Mar 4 00:50:20.022399 containerd[1718]: time="2026-03-04T00:50:20.022063571Z" level=info msg="TearDown network for sandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\" successfully" Mar 4 00:50:20.031239 containerd[1718]: time="2026-03-04T00:50:20.031156020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:20.031331 containerd[1718]: time="2026-03-04T00:50:20.031295580Z" level=info msg="RemovePodSandbox \"293a7e3474aa94f74a0cfcdc223516934586a2dd41f58e942c91b9025ce25f19\" returns successfully" Mar 4 00:50:20.031857 containerd[1718]: time="2026-03-04T00:50:20.031834781Z" level=info msg="StopPodSandbox for \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\"" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.065 [WARNING][6311] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"42250f5b-0fe9-4ffb-8a66-17380f81c557", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95", Pod:"coredns-66bc5c9577-6znkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b2fab2036a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.065 [INFO][6311] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.065 [INFO][6311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" iface="eth0" netns="" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.065 [INFO][6311] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.065 [INFO][6311] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.082 [INFO][6318] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.083 [INFO][6318] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.083 [INFO][6318] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.091 [WARNING][6318] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.091 [INFO][6318] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.092 [INFO][6318] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:20.095927 containerd[1718]: 2026-03-04 00:50:20.094 [INFO][6311] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.096385 containerd[1718]: time="2026-03-04T00:50:20.095968604Z" level=info msg="TearDown network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\" successfully" Mar 4 00:50:20.096385 containerd[1718]: time="2026-03-04T00:50:20.095993124Z" level=info msg="StopPodSandbox for \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\" returns successfully" Mar 4 00:50:20.096428 containerd[1718]: time="2026-03-04T00:50:20.096396004Z" level=info msg="RemovePodSandbox for \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\"" Mar 4 00:50:20.096428 containerd[1718]: time="2026-03-04T00:50:20.096421684Z" level=info msg="Forcibly stopping sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\"" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.133 [WARNING][6332] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"42250f5b-0fe9-4ffb-8a66-17380f81c557", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 49, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32bda88c6e", ContainerID:"70f35af60534bd1a1b21e0f0f0052dd7d4707bd2de777359657447a506414b95", Pod:"coredns-66bc5c9577-6znkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b2fab2036a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.133 [INFO][6332] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.133 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" iface="eth0" netns="" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.133 [INFO][6332] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.133 [INFO][6332] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.152 [INFO][6339] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.152 [INFO][6339] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.152 [INFO][6339] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.161 [WARNING][6339] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.161 [INFO][6339] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" HandleID="k8s-pod-network.406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Workload="ci--4081.3.6--n--32bda88c6e-k8s-coredns--66bc5c9577--6znkz-eth0" Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.162 [INFO][6339] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:50:20.166136 containerd[1718]: 2026-03-04 00:50:20.164 [INFO][6332] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0" Mar 4 00:50:20.166136 containerd[1718]: time="2026-03-04T00:50:20.166110472Z" level=info msg="TearDown network for sandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\" successfully" Mar 4 00:50:20.176047 containerd[1718]: time="2026-03-04T00:50:20.175992882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:50:20.176201 containerd[1718]: time="2026-03-04T00:50:20.176066842Z" level=info msg="RemovePodSandbox \"406d779d89c16eb5fec6adcb869e93338c54c6696e49eed714d63825909c1af0\" returns successfully" Mar 4 00:50:43.653001 kubelet[3200]: I0304 00:50:43.652831 3200 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:51:02.086900 systemd[1]: Started sshd@7-10.200.20.22:22-10.200.16.10:52290.service - OpenSSH per-connection server daemon (10.200.16.10:52290). Mar 4 00:51:02.575619 sshd[6486]: Accepted publickey for core from 10.200.16.10 port 52290 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:02.577304 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:02.581858 systemd-logind[1694]: New session 10 of user core. Mar 4 00:51:02.589712 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 4 00:51:03.024362 sshd[6486]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:03.028872 systemd-logind[1694]: Session 10 logged out. Waiting for processes to exit. Mar 4 00:51:03.029162 systemd[1]: sshd@7-10.200.20.22:22-10.200.16.10:52290.service: Deactivated successfully. Mar 4 00:51:03.030879 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 00:51:03.033481 systemd-logind[1694]: Removed session 10. Mar 4 00:51:08.119890 systemd[1]: Started sshd@8-10.200.20.22:22-10.200.16.10:52300.service - OpenSSH per-connection server daemon (10.200.16.10:52300). Mar 4 00:51:08.611588 sshd[6523]: Accepted publickey for core from 10.200.16.10 port 52300 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:08.612519 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:08.616905 systemd-logind[1694]: New session 11 of user core. Mar 4 00:51:08.626734 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 4 00:51:09.056837 sshd[6523]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:09.061216 systemd[1]: sshd@8-10.200.20.22:22-10.200.16.10:52300.service: Deactivated successfully. Mar 4 00:51:09.063361 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 00:51:09.064280 systemd-logind[1694]: Session 11 logged out. Waiting for processes to exit. Mar 4 00:51:09.065354 systemd-logind[1694]: Removed session 11. Mar 4 00:51:14.153817 systemd[1]: Started sshd@9-10.200.20.22:22-10.200.16.10:38486.service - OpenSSH per-connection server daemon (10.200.16.10:38486). Mar 4 00:51:14.640592 sshd[6601]: Accepted publickey for core from 10.200.16.10 port 38486 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:14.641788 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:14.646063 systemd-logind[1694]: New session 12 of user core. Mar 4 00:51:14.649758 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 00:51:15.056866 sshd[6601]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:15.060845 systemd[1]: sshd@9-10.200.20.22:22-10.200.16.10:38486.service: Deactivated successfully. Mar 4 00:51:15.063321 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 00:51:15.064330 systemd-logind[1694]: Session 12 logged out. Waiting for processes to exit. Mar 4 00:51:15.065454 systemd-logind[1694]: Removed session 12. Mar 4 00:51:20.149957 systemd[1]: Started sshd@10-10.200.20.22:22-10.200.16.10:43186.service - OpenSSH per-connection server daemon (10.200.16.10:43186). Mar 4 00:51:20.637583 sshd[6616]: Accepted publickey for core from 10.200.16.10 port 43186 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:20.638423 sshd[6616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:20.642104 systemd-logind[1694]: New session 13 of user core. 
Mar 4 00:51:20.645696 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 4 00:51:21.064149 sshd[6616]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:21.067474 systemd[1]: sshd@10-10.200.20.22:22-10.200.16.10:43186.service: Deactivated successfully. Mar 4 00:51:21.069381 systemd[1]: session-13.scope: Deactivated successfully. Mar 4 00:51:21.070232 systemd-logind[1694]: Session 13 logged out. Waiting for processes to exit. Mar 4 00:51:21.071547 systemd-logind[1694]: Removed session 13. Mar 4 00:51:26.157789 systemd[1]: Started sshd@11-10.200.20.22:22-10.200.16.10:43190.service - OpenSSH per-connection server daemon (10.200.16.10:43190). Mar 4 00:51:26.643544 sshd[6648]: Accepted publickey for core from 10.200.16.10 port 43190 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:26.645342 sshd[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:26.649170 systemd-logind[1694]: New session 14 of user core. Mar 4 00:51:26.656706 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 4 00:51:27.064800 sshd[6648]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:27.068550 systemd[1]: sshd@11-10.200.20.22:22-10.200.16.10:43190.service: Deactivated successfully. Mar 4 00:51:27.070652 systemd[1]: session-14.scope: Deactivated successfully. Mar 4 00:51:27.072454 systemd-logind[1694]: Session 14 logged out. Waiting for processes to exit. Mar 4 00:51:27.073854 systemd-logind[1694]: Removed session 14. Mar 4 00:51:27.155792 systemd[1]: Started sshd@12-10.200.20.22:22-10.200.16.10:43206.service - OpenSSH per-connection server daemon (10.200.16.10:43206). 
Mar 4 00:51:27.643633 sshd[6675]: Accepted publickey for core from 10.200.16.10 port 43206 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:27.645445 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:27.649462 systemd-logind[1694]: New session 15 of user core. Mar 4 00:51:27.656723 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 4 00:51:28.092457 sshd[6675]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:28.097016 systemd[1]: sshd@12-10.200.20.22:22-10.200.16.10:43206.service: Deactivated successfully. Mar 4 00:51:28.099481 systemd[1]: session-15.scope: Deactivated successfully. Mar 4 00:51:28.100391 systemd-logind[1694]: Session 15 logged out. Waiting for processes to exit. Mar 4 00:51:28.101721 systemd-logind[1694]: Removed session 15. Mar 4 00:51:28.187164 systemd[1]: Started sshd@13-10.200.20.22:22-10.200.16.10:43214.service - OpenSSH per-connection server daemon (10.200.16.10:43214). Mar 4 00:51:28.670844 sshd[6686]: Accepted publickey for core from 10.200.16.10 port 43214 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:28.672036 sshd[6686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:28.676890 systemd-logind[1694]: New session 16 of user core. Mar 4 00:51:28.681712 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 4 00:51:29.090220 sshd[6686]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:29.093859 systemd-logind[1694]: Session 16 logged out. Waiting for processes to exit. Mar 4 00:51:29.094510 systemd[1]: sshd@13-10.200.20.22:22-10.200.16.10:43214.service: Deactivated successfully. Mar 4 00:51:29.096750 systemd[1]: session-16.scope: Deactivated successfully. Mar 4 00:51:29.097915 systemd-logind[1694]: Removed session 16. 
Mar 4 00:51:34.178638 systemd[1]: Started sshd@14-10.200.20.22:22-10.200.16.10:59592.service - OpenSSH per-connection server daemon (10.200.16.10:59592). Mar 4 00:51:34.672088 sshd[6727]: Accepted publickey for core from 10.200.16.10 port 59592 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:34.673367 sshd[6727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:34.677778 systemd-logind[1694]: New session 17 of user core. Mar 4 00:51:34.682759 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 4 00:51:35.095167 sshd[6727]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:35.098599 systemd[1]: sshd@14-10.200.20.22:22-10.200.16.10:59592.service: Deactivated successfully. Mar 4 00:51:35.102082 systemd[1]: session-17.scope: Deactivated successfully. Mar 4 00:51:35.103182 systemd-logind[1694]: Session 17 logged out. Waiting for processes to exit. Mar 4 00:51:35.104170 systemd-logind[1694]: Removed session 17. Mar 4 00:51:35.184587 systemd[1]: Started sshd@15-10.200.20.22:22-10.200.16.10:59606.service - OpenSSH per-connection server daemon (10.200.16.10:59606). Mar 4 00:51:35.679834 sshd[6739]: Accepted publickey for core from 10.200.16.10 port 59606 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:35.681391 sshd[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:35.686307 systemd-logind[1694]: New session 18 of user core. Mar 4 00:51:35.690700 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 4 00:51:36.214865 sshd[6739]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:36.218927 systemd[1]: sshd@15-10.200.20.22:22-10.200.16.10:59606.service: Deactivated successfully. Mar 4 00:51:36.220830 systemd[1]: session-18.scope: Deactivated successfully. Mar 4 00:51:36.221465 systemd-logind[1694]: Session 18 logged out. Waiting for processes to exit. 
Mar 4 00:51:36.222473 systemd-logind[1694]: Removed session 18. Mar 4 00:51:36.302204 systemd[1]: Started sshd@16-10.200.20.22:22-10.200.16.10:59622.service - OpenSSH per-connection server daemon (10.200.16.10:59622). Mar 4 00:51:36.791503 sshd[6762]: Accepted publickey for core from 10.200.16.10 port 59622 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:36.792989 sshd[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:36.799517 systemd-logind[1694]: New session 19 of user core. Mar 4 00:51:36.803717 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 4 00:51:37.852921 sshd[6762]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:37.855814 systemd[1]: sshd@16-10.200.20.22:22-10.200.16.10:59622.service: Deactivated successfully. Mar 4 00:51:37.858138 systemd[1]: session-19.scope: Deactivated successfully. Mar 4 00:51:37.860065 systemd-logind[1694]: Session 19 logged out. Waiting for processes to exit. Mar 4 00:51:37.861285 systemd-logind[1694]: Removed session 19. Mar 4 00:51:37.948567 systemd[1]: Started sshd@17-10.200.20.22:22-10.200.16.10:59632.service - OpenSSH per-connection server daemon (10.200.16.10:59632). Mar 4 00:51:38.434622 sshd[6830]: Accepted publickey for core from 10.200.16.10 port 59632 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:38.436070 sshd[6830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:38.441412 systemd-logind[1694]: New session 20 of user core. Mar 4 00:51:38.446765 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 4 00:51:38.972373 sshd[6830]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:38.976344 systemd[1]: sshd@17-10.200.20.22:22-10.200.16.10:59632.service: Deactivated successfully. Mar 4 00:51:38.979173 systemd[1]: session-20.scope: Deactivated successfully. Mar 4 00:51:38.980064 systemd-logind[1694]: Session 20 logged out. 
Waiting for processes to exit. Mar 4 00:51:38.981011 systemd-logind[1694]: Removed session 20. Mar 4 00:51:39.063817 systemd[1]: Started sshd@18-10.200.20.22:22-10.200.16.10:59648.service - OpenSSH per-connection server daemon (10.200.16.10:59648). Mar 4 00:51:39.549476 sshd[6864]: Accepted publickey for core from 10.200.16.10 port 59648 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:39.550807 sshd[6864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:39.554825 systemd-logind[1694]: New session 21 of user core. Mar 4 00:51:39.557751 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 4 00:51:39.951768 sshd[6864]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:39.956077 systemd[1]: sshd@18-10.200.20.22:22-10.200.16.10:59648.service: Deactivated successfully. Mar 4 00:51:39.958349 systemd[1]: session-21.scope: Deactivated successfully. Mar 4 00:51:39.961153 systemd-logind[1694]: Session 21 logged out. Waiting for processes to exit. Mar 4 00:51:39.962664 systemd-logind[1694]: Removed session 21. Mar 4 00:51:45.045945 systemd[1]: Started sshd@19-10.200.20.22:22-10.200.16.10:50348.service - OpenSSH per-connection server daemon (10.200.16.10:50348). Mar 4 00:51:45.529106 sshd[6901]: Accepted publickey for core from 10.200.16.10 port 50348 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:45.530474 sshd[6901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:45.534188 systemd-logind[1694]: New session 22 of user core. Mar 4 00:51:45.538799 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 4 00:51:45.947375 sshd[6901]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:45.952702 systemd[1]: sshd@19-10.200.20.22:22-10.200.16.10:50348.service: Deactivated successfully. Mar 4 00:51:45.955172 systemd[1]: session-22.scope: Deactivated successfully. 
Mar 4 00:51:45.956753 systemd-logind[1694]: Session 22 logged out. Waiting for processes to exit. Mar 4 00:51:45.957707 systemd-logind[1694]: Removed session 22. Mar 4 00:51:51.044948 systemd[1]: Started sshd@20-10.200.20.22:22-10.200.16.10:36994.service - OpenSSH per-connection server daemon (10.200.16.10:36994). Mar 4 00:51:51.537814 sshd[6913]: Accepted publickey for core from 10.200.16.10 port 36994 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:51.538674 sshd[6913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:51.542780 systemd-logind[1694]: New session 23 of user core. Mar 4 00:51:51.548714 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 4 00:51:51.939287 sshd[6913]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:51.945058 systemd[1]: sshd@20-10.200.20.22:22-10.200.16.10:36994.service: Deactivated successfully. Mar 4 00:51:51.949702 systemd[1]: session-23.scope: Deactivated successfully. Mar 4 00:51:51.951480 systemd-logind[1694]: Session 23 logged out. Waiting for processes to exit. Mar 4 00:51:51.952943 systemd-logind[1694]: Removed session 23. Mar 4 00:51:57.034852 systemd[1]: Started sshd@21-10.200.20.22:22-10.200.16.10:37010.service - OpenSSH per-connection server daemon (10.200.16.10:37010). Mar 4 00:51:57.523131 sshd[6928]: Accepted publickey for core from 10.200.16.10 port 37010 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:51:57.524787 sshd[6928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:51:57.529901 systemd-logind[1694]: New session 24 of user core. Mar 4 00:51:57.533076 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 4 00:51:57.930908 sshd[6928]: pam_unix(sshd:session): session closed for user core Mar 4 00:51:57.935531 systemd[1]: sshd@21-10.200.20.22:22-10.200.16.10:37010.service: Deactivated successfully. 
Mar 4 00:51:57.938044 systemd[1]: session-24.scope: Deactivated successfully. Mar 4 00:51:57.939209 systemd-logind[1694]: Session 24 logged out. Waiting for processes to exit. Mar 4 00:51:57.940637 systemd-logind[1694]: Removed session 24. Mar 4 00:52:03.019462 systemd[1]: Started sshd@22-10.200.20.22:22-10.200.16.10:40798.service - OpenSSH per-connection server daemon (10.200.16.10:40798). Mar 4 00:52:03.083735 systemd[1]: run-containerd-runc-k8s.io-00320d363fc65163ef297a68a9b397fa7ff4154814e378b0a029c1c62fdb79c1-runc.SUpFpi.mount: Deactivated successfully. Mar 4 00:52:03.515267 sshd[6941]: Accepted publickey for core from 10.200.16.10 port 40798 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:52:03.539045 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:52:03.543516 systemd-logind[1694]: New session 25 of user core. Mar 4 00:52:03.550725 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 4 00:52:03.927463 sshd[6941]: pam_unix(sshd:session): session closed for user core Mar 4 00:52:03.932825 systemd[1]: sshd@22-10.200.20.22:22-10.200.16.10:40798.service: Deactivated successfully. Mar 4 00:52:03.937002 systemd[1]: session-25.scope: Deactivated successfully. Mar 4 00:52:03.937935 systemd-logind[1694]: Session 25 logged out. Waiting for processes to exit. Mar 4 00:52:03.939262 systemd-logind[1694]: Removed session 25. Mar 4 00:52:09.015454 systemd[1]: Started sshd@23-10.200.20.22:22-10.200.16.10:40800.service - OpenSSH per-connection server daemon (10.200.16.10:40800). Mar 4 00:52:09.510417 sshd[6993]: Accepted publickey for core from 10.200.16.10 port 40800 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:52:09.511787 sshd[6993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:52:09.517146 systemd-logind[1694]: New session 26 of user core. 
Mar 4 00:52:09.519687 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 4 00:52:09.922080 sshd[6993]: pam_unix(sshd:session): session closed for user core Mar 4 00:52:09.926344 systemd-logind[1694]: Session 26 logged out. Waiting for processes to exit. Mar 4 00:52:09.926709 systemd[1]: sshd@23-10.200.20.22:22-10.200.16.10:40800.service: Deactivated successfully. Mar 4 00:52:09.929276 systemd[1]: session-26.scope: Deactivated successfully. Mar 4 00:52:09.930337 systemd-logind[1694]: Removed session 26.