Mar 11 01:23:01.159070 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 11 01:23:01.159091 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 10 23:05:53 -00 2026
Mar 11 01:23:01.159099 kernel: KASLR enabled
Mar 11 01:23:01.159105 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 11 01:23:01.159112 kernel: printk: bootconsole [pl11] enabled
Mar 11 01:23:01.159118 kernel: efi: EFI v2.7 by EDK II
Mar 11 01:23:01.159125 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 11 01:23:01.159131 kernel: random: crng init done
Mar 11 01:23:01.159137 kernel: ACPI: Early table checksum verification disabled
Mar 11 01:23:01.159143 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 11 01:23:01.159149 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159155 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159163 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 11 01:23:01.159169 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159176 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159183 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159189 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159197 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159204 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159210 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 11 01:23:01.159216 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:23:01.159223 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 11 01:23:01.159229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 11 01:23:01.159236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 11 01:23:01.159242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 11 01:23:01.159248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 11 01:23:01.159255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 11 01:23:01.159261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 11 01:23:01.159269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 11 01:23:01.159276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 11 01:23:01.159282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 11 01:23:01.159288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 11 01:23:01.159295 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 11 01:23:01.159301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 11 01:23:01.159307 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 11 01:23:01.161338 kernel: Zone ranges:
Mar 11 01:23:01.161366 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Mar 11 01:23:01.161374 kernel:   DMA32    empty
Mar 11 01:23:01.161381 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Mar 11 01:23:01.161388 kernel: Movable zone start for each node
Mar 11 01:23:01.161402 kernel: Early memory node ranges
Mar 11 01:23:01.161409 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 11 01:23:01.161416 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 11 01:23:01.161423 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 11 01:23:01.161430 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 11 01:23:01.161438 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 11 01:23:01.161445 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 11 01:23:01.161452 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 11 01:23:01.161460 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 11 01:23:01.161467 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 11 01:23:01.161473 kernel: psci: probing for conduit method from ACPI.
Mar 11 01:23:01.161480 kernel: psci: PSCIv1.1 detected in firmware.
Mar 11 01:23:01.161487 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 11 01:23:01.161494 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 11 01:23:01.161500 kernel: psci: SMC Calling Convention v1.4
Mar 11 01:23:01.161507 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 11 01:23:01.161514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 11 01:23:01.161523 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 11 01:23:01.161529 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 11 01:23:01.161537 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 11 01:23:01.161543 kernel: Detected PIPT I-cache on CPU0
Mar 11 01:23:01.161550 kernel: CPU features: detected: GIC system register CPU interface
Mar 11 01:23:01.161557 kernel: CPU features: detected: Hardware dirty bit management
Mar 11 01:23:01.161564 kernel: CPU features: detected: Spectre-BHB
Mar 11 01:23:01.161571 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 11 01:23:01.161577 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 11 01:23:01.161584 kernel: CPU features: detected: ARM erratum 1418040
Mar 11 01:23:01.161591 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 11 01:23:01.161600 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 11 01:23:01.161607 kernel: alternatives: applying boot alternatives
Mar 11 01:23:01.161615 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c
Mar 11 01:23:01.161622 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 01:23:01.161629 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 01:23:01.161636 kernel: Fallback order for Node 0: 0
Mar 11 01:23:01.161643 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 11 01:23:01.161650 kernel: Policy zone: Normal
Mar 11 01:23:01.161656 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 01:23:01.161663 kernel: software IO TLB: area num 2.
Mar 11 01:23:01.161670 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 11 01:23:01.161678 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 11 01:23:01.161686 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 11 01:23:01.161692 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 01:23:01.161700 kernel: rcu: RCU event tracing is enabled.
Mar 11 01:23:01.161707 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 11 01:23:01.161714 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 01:23:01.161721 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 01:23:01.161728 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 01:23:01.161735 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 11 01:23:01.161741 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 11 01:23:01.161748 kernel: GICv3: 960 SPIs implemented
Mar 11 01:23:01.161757 kernel: GICv3: 0 Extended SPIs implemented
Mar 11 01:23:01.161763 kernel: Root IRQ handler: gic_handle_irq
Mar 11 01:23:01.161770 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 11 01:23:01.161777 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 11 01:23:01.161784 kernel: ITS: No ITS available, not enabling LPIs
Mar 11 01:23:01.161791 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 01:23:01.161798 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 11 01:23:01.161805 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 11 01:23:01.161812 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 11 01:23:01.161819 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 11 01:23:01.161826 kernel: Console: colour dummy device 80x25
Mar 11 01:23:01.161835 kernel: printk: console [tty1] enabled
Mar 11 01:23:01.161842 kernel: ACPI: Core revision 20230628
Mar 11 01:23:01.161849 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 11 01:23:01.161856 kernel: pid_max: default: 32768 minimum: 301
Mar 11 01:23:01.161863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 11 01:23:01.161870 kernel: landlock: Up and running.
Mar 11 01:23:01.161877 kernel: SELinux: Initializing.
Mar 11 01:23:01.161884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:23:01.161891 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:23:01.161900 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 11 01:23:01.161907 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 11 01:23:01.161915 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 11 01:23:01.161921 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 11 01:23:01.161928 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 11 01:23:01.161935 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 01:23:01.161942 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 01:23:01.161949 kernel: Remapping and enabling EFI services.
Mar 11 01:23:01.161964 kernel: smp: Bringing up secondary CPUs ...
Mar 11 01:23:01.161971 kernel: Detected PIPT I-cache on CPU1
Mar 11 01:23:01.161978 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 11 01:23:01.161985 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 11 01:23:01.161994 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 11 01:23:01.162002 kernel: smp: Brought up 1 node, 2 CPUs
Mar 11 01:23:01.162009 kernel: SMP: Total of 2 processors activated.
Mar 11 01:23:01.162017 kernel: CPU features: detected: 32-bit EL0 Support
Mar 11 01:23:01.162024 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 11 01:23:01.162034 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 11 01:23:01.162042 kernel: CPU features: detected: CRC32 instructions
Mar 11 01:23:01.162049 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 11 01:23:01.162057 kernel: CPU features: detected: LSE atomic instructions
Mar 11 01:23:01.162064 kernel: CPU features: detected: Privileged Access Never
Mar 11 01:23:01.162071 kernel: CPU: All CPU(s) started at EL1
Mar 11 01:23:01.162079 kernel: alternatives: applying system-wide alternatives
Mar 11 01:23:01.162086 kernel: devtmpfs: initialized
Mar 11 01:23:01.162094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 01:23:01.162103 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 11 01:23:01.162110 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 01:23:01.162118 kernel: SMBIOS 3.1.0 present.
Mar 11 01:23:01.162125 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 11 01:23:01.162133 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 01:23:01.162140 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 11 01:23:01.162148 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 11 01:23:01.162155 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 11 01:23:01.162162 kernel: audit: initializing netlink subsys (disabled)
Mar 11 01:23:01.162172 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 11 01:23:01.162179 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 01:23:01.162186 kernel: cpuidle: using governor menu
Mar 11 01:23:01.162194 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 11 01:23:01.162201 kernel: ASID allocator initialised with 32768 entries
Mar 11 01:23:01.162209 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 01:23:01.162216 kernel: Serial: AMBA PL011 UART driver
Mar 11 01:23:01.162223 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 11 01:23:01.162231 kernel: Modules: 0 pages in range for non-PLT usage
Mar 11 01:23:01.162240 kernel: Modules: 509008 pages in range for PLT usage
Mar 11 01:23:01.162247 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 01:23:01.162255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 01:23:01.162262 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 11 01:23:01.162270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 11 01:23:01.162277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 01:23:01.162284 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 01:23:01.162292 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 11 01:23:01.162299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 11 01:23:01.162308 kernel: ACPI: Added _OSI(Module Device)
Mar 11 01:23:01.164373 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 01:23:01.164388 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 01:23:01.164396 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 01:23:01.164403 kernel: ACPI: Interpreter enabled
Mar 11 01:23:01.164411 kernel: ACPI: Using GIC for interrupt routing
Mar 11 01:23:01.164419 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 11 01:23:01.164426 kernel: printk: console [ttyAMA0] enabled
Mar 11 01:23:01.164433 kernel: printk: bootconsole [pl11] disabled
Mar 11 01:23:01.164445 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 11 01:23:01.164453 kernel: iommu: Default domain type: Translated
Mar 11 01:23:01.164460 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 11 01:23:01.164467 kernel: efivars: Registered efivars operations
Mar 11 01:23:01.164475 kernel: vgaarb: loaded
Mar 11 01:23:01.164482 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 11 01:23:01.164489 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 01:23:01.164497 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 01:23:01.164504 kernel: pnp: PnP ACPI init
Mar 11 01:23:01.164513 kernel: pnp: PnP ACPI: found 0 devices
Mar 11 01:23:01.164521 kernel: NET: Registered PF_INET protocol family
Mar 11 01:23:01.164528 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 01:23:01.164536 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 01:23:01.164544 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 01:23:01.164551 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 01:23:01.164559 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 01:23:01.164566 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 01:23:01.164574 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:23:01.164583 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:23:01.164590 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 01:23:01.164598 kernel: PCI: CLS 0 bytes, default 64
Mar 11 01:23:01.164605 kernel: kvm [1]: HYP mode not available
Mar 11 01:23:01.164612 kernel: Initialise system trusted keyrings
Mar 11 01:23:01.164620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 01:23:01.164627 kernel: Key type asymmetric registered
Mar 11 01:23:01.164634 kernel: Asymmetric key parser 'x509' registered
Mar 11 01:23:01.164642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 11 01:23:01.164651 kernel: io scheduler mq-deadline registered
Mar 11 01:23:01.164658 kernel: io scheduler kyber registered
Mar 11 01:23:01.164666 kernel: io scheduler bfq registered
Mar 11 01:23:01.164673 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 01:23:01.164680 kernel: thunder_xcv, ver 1.0
Mar 11 01:23:01.164688 kernel: thunder_bgx, ver 1.0
Mar 11 01:23:01.164695 kernel: nicpf, ver 1.0
Mar 11 01:23:01.164702 kernel: nicvf, ver 1.0
Mar 11 01:23:01.164837 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 11 01:23:01.164913 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-11T01:23:00 UTC (1773192180)
Mar 11 01:23:01.164924 kernel: efifb: probing for efifb
Mar 11 01:23:01.164931 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 11 01:23:01.164939 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 11 01:23:01.164946 kernel: efifb: scrolling: redraw
Mar 11 01:23:01.164953 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 11 01:23:01.164961 kernel: Console: switching to colour frame buffer device 128x48
Mar 11 01:23:01.164968 kernel: fb0: EFI VGA frame buffer device
Mar 11 01:23:01.164977 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 11 01:23:01.164985 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 11 01:23:01.164992 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 11 01:23:01.165000 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 11 01:23:01.165007 kernel: watchdog: Hard watchdog permanently disabled
Mar 11 01:23:01.165014 kernel: NET: Registered PF_INET6 protocol family
Mar 11 01:23:01.165022 kernel: Segment Routing with IPv6
Mar 11 01:23:01.165029 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 01:23:01.165036 kernel: NET: Registered PF_PACKET protocol family
Mar 11 01:23:01.165045 kernel: Key type dns_resolver registered
Mar 11 01:23:01.165053 kernel: registered taskstats version 1
Mar 11 01:23:01.165060 kernel: Loading compiled-in X.509 certificates
Mar 11 01:23:01.165068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e2d32b7c633536fa6eb6e76ba97909ae7ad11d09'
Mar 11 01:23:01.165075 kernel: Key type .fscrypt registered
Mar 11 01:23:01.165082 kernel: Key type fscrypt-provisioning registered
Mar 11 01:23:01.165089 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 01:23:01.165097 kernel: ima: Allocated hash algorithm: sha1
Mar 11 01:23:01.165105 kernel: ima: No architecture policies found
Mar 11 01:23:01.165113 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 11 01:23:01.165121 kernel: clk: Disabling unused clocks
Mar 11 01:23:01.165128 kernel: Freeing unused kernel memory: 39424K
Mar 11 01:23:01.165136 kernel: Run /init as init process
Mar 11 01:23:01.165143 kernel:   with arguments:
Mar 11 01:23:01.165150 kernel:     /init
Mar 11 01:23:01.165157 kernel:   with environment:
Mar 11 01:23:01.165164 kernel:     HOME=/
Mar 11 01:23:01.165171 kernel:     TERM=linux
Mar 11 01:23:01.165181 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 01:23:01.165192 systemd[1]: Detected virtualization microsoft.
Mar 11 01:23:01.165200 systemd[1]: Detected architecture arm64.
Mar 11 01:23:01.165207 systemd[1]: Running in initrd.
Mar 11 01:23:01.165215 systemd[1]: No hostname configured, using default hostname.
Mar 11 01:23:01.165222 systemd[1]: Hostname set to .
Mar 11 01:23:01.165231 systemd[1]: Initializing machine ID from random generator.
Mar 11 01:23:01.165240 systemd[1]: Queued start job for default target initrd.target.
Mar 11 01:23:01.165248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:23:01.165256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:23:01.165265 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 01:23:01.165273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 01:23:01.165281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 01:23:01.165289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 01:23:01.165299 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 01:23:01.165309 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 01:23:01.165382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:23:01.165392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:23:01.165400 systemd[1]: Reached target paths.target - Path Units.
Mar 11 01:23:01.165408 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 01:23:01.165416 systemd[1]: Reached target swap.target - Swaps.
Mar 11 01:23:01.165424 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 01:23:01.165432 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 01:23:01.165442 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 01:23:01.165451 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 11 01:23:01.165458 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 11 01:23:01.165466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:23:01.165474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:23:01.165482 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:23:01.165490 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 01:23:01.165498 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 11 01:23:01.165508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 01:23:01.165516 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 11 01:23:01.165524 systemd[1]: Starting systemd-fsck-usr.service...
Mar 11 01:23:01.165532 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 01:23:01.165558 systemd-journald[217]: Collecting audit messages is disabled.
Mar 11 01:23:01.165580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 01:23:01.165588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:23:01.165596 systemd-journald[217]: Journal started
Mar 11 01:23:01.165615 systemd-journald[217]: Runtime Journal (/run/log/journal/d43bf85347104897bff76f46285c5a94) is 8.0M, max 78.5M, 70.5M free.
Mar 11 01:23:01.170482 systemd-modules-load[218]: Inserted module 'overlay'
Mar 11 01:23:01.199371 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 01:23:01.199418 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 11 01:23:01.206383 kernel: Bridge firewalling registered
Mar 11 01:23:01.204180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 11 01:23:01.206589 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 11 01:23:01.213335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:23:01.221914 systemd[1]: Finished systemd-fsck-usr.service.
Mar 11 01:23:01.229026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:23:01.239275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:01.258676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:23:01.271480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 01:23:01.283472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 01:23:01.299266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 01:23:01.312881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:23:01.323827 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:23:01.333037 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:23:01.344758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:23:01.360615 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 11 01:23:01.367468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 01:23:01.384296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 01:23:01.398603 dracut-cmdline[250]: dracut-dracut-053
Mar 11 01:23:01.404493 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:23:01.416893 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c
Mar 11 01:23:01.414853 systemd-resolved[252]: Positive Trust Anchors:
Mar 11 01:23:01.414863 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 01:23:01.414895 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 01:23:01.417296 systemd-resolved[252]: Defaulting to hostname 'linux'.
Mar 11 01:23:01.418097 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 01:23:01.445833 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 01:23:01.559341 kernel: SCSI subsystem initialized
Mar 11 01:23:01.566327 kernel: Loading iSCSI transport class v2.0-870.
Mar 11 01:23:01.576336 kernel: iscsi: registered transport (tcp)
Mar 11 01:23:01.592412 kernel: iscsi: registered transport (qla4xxx)
Mar 11 01:23:01.592470 kernel: QLogic iSCSI HBA Driver
Mar 11 01:23:01.625884 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 11 01:23:01.636550 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 11 01:23:01.664233 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 11 01:23:01.664282 kernel: device-mapper: uevent: version 1.0.3
Mar 11 01:23:01.669327 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 11 01:23:01.715331 kernel: raid6: neonx8   gen() 15821 MB/s
Mar 11 01:23:01.734321 kernel: raid6: neonx4   gen() 15700 MB/s
Mar 11 01:23:01.753321 kernel: raid6: neonx2   gen() 13230 MB/s
Mar 11 01:23:01.773321 kernel: raid6: neonx1   gen() 10483 MB/s
Mar 11 01:23:01.792325 kernel: raid6: int64x8  gen()  6979 MB/s
Mar 11 01:23:01.811324 kernel: raid6: int64x4  gen()  7372 MB/s
Mar 11 01:23:01.831321 kernel: raid6: int64x2  gen()  6146 MB/s
Mar 11 01:23:01.853395 kernel: raid6: int64x1  gen()  5068 MB/s
Mar 11 01:23:01.853414 kernel: raid6: using algorithm neonx8 gen() 15821 MB/s
Mar 11 01:23:01.875964 kernel: raid6: .... xor() 11989 MB/s, rmw enabled
Mar 11 01:23:01.875983 kernel: raid6: using neon recovery algorithm
Mar 11 01:23:01.886375 kernel: xor: measuring software checksum speed
Mar 11 01:23:01.886388 kernel: 8regs : 19778 MB/sec
Mar 11 01:23:01.889070 kernel: 32regs : 19627 MB/sec
Mar 11 01:23:01.895194 kernel: arm64_neon : 26238 MB/sec
Mar 11 01:23:01.895204 kernel: xor: using function: arm64_neon (26238 MB/sec)
Mar 11 01:23:01.945369 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 11 01:23:01.954443 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 01:23:01.968448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:23:01.987375 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 11 01:23:01.991851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:23:02.006496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 11 01:23:02.026578 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation
Mar 11 01:23:02.054552 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 01:23:02.073538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 01:23:02.109882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:23:02.127457 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 11 01:23:02.139536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 11 01:23:02.154413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 01:23:02.166636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 01:23:02.176711 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 01:23:02.194538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 11 01:23:02.212834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 01:23:02.221604 kernel: hv_vmbus: Vmbus version:5.3
Mar 11 01:23:02.227895 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 01:23:02.232176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:23:02.250004 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:23:02.260830 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 11 01:23:02.260851 kernel: hv_vmbus: registering driver hid_hyperv
Mar 11 01:23:02.260861 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 11 01:23:02.265869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:23:02.284516 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 11 01:23:02.284536 kernel: hv_vmbus: registering driver hv_storvsc
Mar 11 01:23:02.284549 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 11 01:23:02.269998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:02.293946 kernel: scsi host1: storvsc_host_t
Mar 11 01:23:02.296150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:23:02.344491 kernel: scsi host0: storvsc_host_t
Mar 11 01:23:02.344664 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 11 01:23:02.344676 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Mar 11 01:23:02.344779 kernel: PTP clock support registered
Mar 11 01:23:02.344790 kernel: hv_vmbus: registering driver hv_netvsc
Mar 11 01:23:02.344799 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 5
Mar 11 01:23:02.344894 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 11 01:23:02.337652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:23:02.365127 kernel: hv_utils: Registering HyperV Utility Driver
Mar 11 01:23:02.365148 kernel: hv_vmbus: registering driver hv_utils
Mar 11 01:23:02.357544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:23:02.377043 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 11 01:23:02.377216 kernel: hv_utils: Heartbeat IC version 3.0
Mar 11 01:23:02.357630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:02.768890 kernel: hv_utils: Shutdown IC version 3.2
Mar 11 01:23:02.768911 kernel: hv_utils: TimeSync IC version 4.0
Mar 11 01:23:02.768921 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 11 01:23:02.768930 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 11 01:23:02.744466 systemd-resolved[252]: Clock change detected. Flushing caches.
Mar 11 01:23:02.790960 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 11 01:23:02.793467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#307 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 11 01:23:02.793628 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 11 01:23:02.793724 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 11 01:23:02.793809 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 11 01:23:02.793891 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 11 01:23:02.763599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:23:02.812603 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: VF slot 1 added
Mar 11 01:23:02.812744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 11 01:23:02.818397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:02.845514 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 11 01:23:02.845715 kernel: hv_vmbus: registering driver hv_pci
Mar 11 01:23:02.845726 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 11 01:23:02.845848 kernel: hv_pci c40579f7-27a7-4a74-bded-df18ecf0c4c8: PCI VMBus probing: Using version 0x10004
Mar 11 01:23:02.848581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:23:02.871759 kernel: hv_pci c40579f7-27a7-4a74-bded-df18ecf0c4c8: PCI host bridge to bus 27a7:00
Mar 11 01:23:02.872039 kernel: pci_bus 27a7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 11 01:23:02.872252 kernel: pci_bus 27a7:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 11 01:23:02.872390 kernel: pci 27a7:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 11 01:23:02.872476 kernel: pci 27a7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 11 01:23:02.872498 kernel: pci 27a7:00:02.0: enabling Extended Tags
Mar 11 01:23:02.872513 kernel: pci 27a7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 27a7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 11 01:23:02.906085 kernel: pci_bus 27a7:00: busn_res: [bus 00-ff] end is updated to 00
Mar 11 01:23:02.906262 kernel: pci 27a7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 11 01:23:02.923463 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:23:02.951998 kernel: mlx5_core 27a7:00:02.0: enabling device (0000 -> 0002)
Mar 11 01:23:02.957442 kernel: mlx5_core 27a7:00:02.0: firmware version: 16.30.5026
Mar 11 01:23:03.153909 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: VF registering: eth1
Mar 11 01:23:03.154099 kernel: mlx5_core 27a7:00:02.0 eth1: joined to eth0
Mar 11 01:23:03.160545 kernel: mlx5_core 27a7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 11 01:23:03.170455 kernel: mlx5_core 27a7:00:02.0 enP10151s1: renamed from eth1
Mar 11 01:23:03.363463 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 11 01:23:03.374009 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 11 01:23:03.425180 kernel: BTRFS: device fsid 6268782d-ce1a-4049-a9c9-846620fa6ee9 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (497)
Mar 11 01:23:03.437818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 11 01:23:03.443112 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 11 01:23:03.466502 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487)
Mar 11 01:23:03.474609 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 11 01:23:03.491695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 11 01:23:03.506453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 11 01:23:03.515445 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 11 01:23:04.525511 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 11 01:23:04.525674 disk-uuid[611]: The operation has completed successfully.
Mar 11 01:23:04.587053 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 11 01:23:04.587147 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 11 01:23:04.625613 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 11 01:23:04.635520 sh[697]: Success
Mar 11 01:23:04.664458 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 11 01:23:04.885301 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 11 01:23:04.890620 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 11 01:23:04.903552 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 11 01:23:04.930469 kernel: BTRFS info (device dm-0): first mount of filesystem 6268782d-ce1a-4049-a9c9-846620fa6ee9
Mar 11 01:23:04.930528 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:23:04.935594 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 11 01:23:04.939666 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 11 01:23:04.942863 kernel: BTRFS info (device dm-0): using free space tree
Mar 11 01:23:05.235830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 11 01:23:05.240251 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 11 01:23:05.256697 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 11 01:23:05.261572 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 11 01:23:05.295525 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:23:05.295565 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:23:05.300186 kernel: BTRFS info (device sda6): using free space tree
Mar 11 01:23:05.336389 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 11 01:23:05.345108 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 11 01:23:05.354340 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:23:05.359166 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 11 01:23:05.373615 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 11 01:23:05.378531 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 01:23:05.389614 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 01:23:05.426924 systemd-networkd[881]: lo: Link UP
Mar 11 01:23:05.426935 systemd-networkd[881]: lo: Gained carrier
Mar 11 01:23:05.428813 systemd-networkd[881]: Enumeration completed
Mar 11 01:23:05.429523 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 01:23:05.434311 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:23:05.434315 systemd-networkd[881]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 01:23:05.435245 systemd[1]: Reached target network.target - Network.
Mar 11 01:23:05.514452 kernel: mlx5_core 27a7:00:02.0 enP10151s1: Link up
Mar 11 01:23:05.554448 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: Data path switched to VF: enP10151s1
Mar 11 01:23:05.554909 systemd-networkd[881]: enP10151s1: Link UP
Mar 11 01:23:05.555028 systemd-networkd[881]: eth0: Link UP
Mar 11 01:23:05.555128 systemd-networkd[881]: eth0: Gained carrier
Mar 11 01:23:05.555137 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:23:05.572911 systemd-networkd[881]: enP10151s1: Gained carrier
Mar 11 01:23:05.582484 systemd-networkd[881]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 11 01:23:06.131624 ignition[876]: Ignition 2.19.0
Mar 11 01:23:06.131635 ignition[876]: Stage: fetch-offline
Mar 11 01:23:06.133089 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 01:23:06.131670 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:06.131678 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:06.131775 ignition[876]: parsed url from cmdline: ""
Mar 11 01:23:06.131778 ignition[876]: no config URL provided
Mar 11 01:23:06.131783 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Mar 11 01:23:06.156667 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 11 01:23:06.131790 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Mar 11 01:23:06.131795 ignition[876]: failed to fetch config: resource requires networking
Mar 11 01:23:06.131957 ignition[876]: Ignition finished successfully
Mar 11 01:23:06.176953 ignition[891]: Ignition 2.19.0
Mar 11 01:23:06.176960 ignition[891]: Stage: fetch
Mar 11 01:23:06.177161 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:06.177173 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:06.177273 ignition[891]: parsed url from cmdline: ""
Mar 11 01:23:06.177277 ignition[891]: no config URL provided
Mar 11 01:23:06.177281 ignition[891]: reading system config file "/usr/lib/ignition/user.ign"
Mar 11 01:23:06.177288 ignition[891]: no config at "/usr/lib/ignition/user.ign"
Mar 11 01:23:06.177313 ignition[891]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 11 01:23:06.286465 ignition[891]: GET result: OK
Mar 11 01:23:06.287359 ignition[891]: config has been read from IMDS userdata
Mar 11 01:23:06.287403 ignition[891]: parsing config with SHA512: 88857634428ae7527be7492a81b254df9662704dadc90e3795067001991601853d715aaf0e8b2a576fc1573060550cbed4299309add146d75b91e85524b8cfba
Mar 11 01:23:06.291413 unknown[891]: fetched base config from "system"
Mar 11 01:23:06.291901 ignition[891]: fetch: fetch complete
Mar 11 01:23:06.291420 unknown[891]: fetched base config from "system"
Mar 11 01:23:06.291906 ignition[891]: fetch: fetch passed
Mar 11 01:23:06.291426 unknown[891]: fetched user config from "azure"
Mar 11 01:23:06.291958 ignition[891]: Ignition finished successfully
Mar 11 01:23:06.294275 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 11 01:23:06.313683 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 11 01:23:06.329702 ignition[897]: Ignition 2.19.0
Mar 11 01:23:06.329710 ignition[897]: Stage: kargs
Mar 11 01:23:06.333553 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 11 01:23:06.329886 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:06.329902 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:06.330872 ignition[897]: kargs: kargs passed
Mar 11 01:23:06.347621 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 11 01:23:06.330914 ignition[897]: Ignition finished successfully
Mar 11 01:23:06.368848 ignition[903]: Ignition 2.19.0
Mar 11 01:23:06.368861 ignition[903]: Stage: disks
Mar 11 01:23:06.369020 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:06.374392 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 11 01:23:06.369030 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:06.378767 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 11 01:23:06.369943 ignition[903]: disks: disks passed
Mar 11 01:23:06.386601 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 11 01:23:06.369989 ignition[903]: Ignition finished successfully
Mar 11 01:23:06.395252 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 01:23:06.403589 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 01:23:06.412198 systemd[1]: Reached target basic.target - Basic System.
Mar 11 01:23:06.430632 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 11 01:23:06.503086 systemd-fsck[912]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 11 01:23:06.511907 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 11 01:23:06.528599 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 11 01:23:06.583457 kernel: EXT4-fs (sda9): mounted filesystem 19488164-8e25-4d6a-86d9-f70a8ed432cb r/w with ordered data mode. Quota mode: none.
Mar 11 01:23:06.583668 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 11 01:23:06.587637 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 11 01:23:06.626495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 01:23:06.644441 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (923)
Mar 11 01:23:06.656204 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:23:06.656253 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:23:06.659797 kernel: BTRFS info (device sda6): using free space tree
Mar 11 01:23:06.662536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 11 01:23:06.673820 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 11 01:23:06.685845 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 11 01:23:06.680130 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 11 01:23:06.680164 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 01:23:06.696892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 01:23:06.703315 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 11 01:23:06.714622 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 11 01:23:07.234542 systemd-networkd[881]: eth0: Gained IPv6LL
Mar 11 01:23:07.253359 coreos-metadata[938]: Mar 11 01:23:07.253 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 11 01:23:07.261254 coreos-metadata[938]: Mar 11 01:23:07.261 INFO Fetch successful
Mar 11 01:23:07.265242 coreos-metadata[938]: Mar 11 01:23:07.265 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 11 01:23:07.283233 coreos-metadata[938]: Mar 11 01:23:07.283 INFO Fetch successful
Mar 11 01:23:07.297486 coreos-metadata[938]: Mar 11 01:23:07.297 INFO wrote hostname ci-4081.3.6-n-541af3988c to /sysroot/etc/hostname
Mar 11 01:23:07.305559 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 11 01:23:07.533734 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory
Mar 11 01:23:07.555077 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory
Mar 11 01:23:07.575796 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory
Mar 11 01:23:07.582657 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 11 01:23:08.564706 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 11 01:23:08.580585 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 11 01:23:08.586631 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 11 01:23:08.605110 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 11 01:23:08.609001 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:23:08.628651 ignition[1042]: INFO : Ignition 2.19.0
Mar 11 01:23:08.632571 ignition[1042]: INFO : Stage: mount
Mar 11 01:23:08.632571 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:08.632571 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:08.645623 ignition[1042]: INFO : mount: mount passed
Mar 11 01:23:08.645623 ignition[1042]: INFO : Ignition finished successfully
Mar 11 01:23:08.645669 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 11 01:23:08.663611 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 11 01:23:08.673111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 11 01:23:08.682127 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 01:23:08.708457 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1053)
Mar 11 01:23:08.708514 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:23:08.713022 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:23:08.716280 kernel: BTRFS info (device sda6): using free space tree
Mar 11 01:23:08.722438 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 11 01:23:08.724156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 01:23:08.747630 ignition[1070]: INFO : Ignition 2.19.0
Mar 11 01:23:08.747630 ignition[1070]: INFO : Stage: files
Mar 11 01:23:08.753799 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:08.753799 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:08.753799 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping
Mar 11 01:23:08.753799 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 11 01:23:08.753799 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 11 01:23:08.849881 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 11 01:23:08.856006 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 11 01:23:08.856006 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 11 01:23:08.850283 unknown[1070]: wrote ssh authorized keys file for user: core
Mar 11 01:23:08.871876 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 11 01:23:08.871876 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 11 01:23:08.905243 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 11 01:23:09.086118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 11 01:23:09.491126 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 11 01:23:10.385656 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 11 01:23:10.385656 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 11 01:23:10.402335 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 11 01:23:10.411675 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 01:23:10.411675 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 01:23:10.411675 ignition[1070]: INFO : files: files passed
Mar 11 01:23:10.411675 ignition[1070]: INFO : Ignition finished successfully
Mar 11 01:23:10.410676 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 11 01:23:10.442675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 11 01:23:10.456601 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 11 01:23:10.467055 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 11 01:23:10.467206 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 11 01:23:10.507692 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 01:23:10.507692 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 01:23:10.526728 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 01:23:10.510080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 01:23:10.520362 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 11 01:23:10.548674 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 11 01:23:10.584311 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 11 01:23:10.584454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 11 01:23:10.593631 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 11 01:23:10.602839 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 11 01:23:10.611013 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 11 01:23:10.625611 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 11 01:23:10.643949 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 01:23:10.655686 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 11 01:23:10.669871 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 11 01:23:10.674807 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 01:23:10.684562 systemd[1]: Stopped target timers.target - Timer Units.
Mar 11 01:23:10.693030 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 11 01:23:10.693153 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 01:23:10.705224 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 11 01:23:10.709764 systemd[1]: Stopped target basic.target - Basic System.
Mar 11 01:23:10.718309 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 11 01:23:10.726837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 01:23:10.735400 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 11 01:23:10.744951 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 11 01:23:10.753966 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 01:23:10.763615 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 11 01:23:10.772388 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 11 01:23:10.781668 systemd[1]: Stopped target swap.target - Swaps.
Mar 11 01:23:10.789710 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 11 01:23:10.789831 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 01:23:10.801740 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:23:10.806389 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:23:10.815237 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 11 01:23:10.815307 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:23:10.824858 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 11 01:23:10.824972 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 11 01:23:10.838492 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 11 01:23:10.838613 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 01:23:10.844432 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 11 01:23:10.844524 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 11 01:23:10.898286 ignition[1122]: INFO : Ignition 2.19.0
Mar 11 01:23:10.898286 ignition[1122]: INFO : Stage: umount
Mar 11 01:23:10.852626 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 11 01:23:10.924392 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:10.924392 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:10.924392 ignition[1122]: INFO : umount: umount passed
Mar 11 01:23:10.924392 ignition[1122]: INFO : Ignition finished successfully
Mar 11 01:23:10.852715 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 11 01:23:10.876665 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 11 01:23:10.898631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 11 01:23:10.903194 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 11 01:23:10.903369 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:23:10.909634 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 11 01:23:10.909735 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 01:23:10.921289 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 11 01:23:10.921393 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 11 01:23:10.930006 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 11 01:23:10.930254 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 11 01:23:10.945722 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 11 01:23:10.945790 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 11 01:23:10.953082 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 11 01:23:10.953123 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 11 01:23:10.957597 systemd[1]: Stopped target network.target - Network.
Mar 11 01:23:10.969067 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 11 01:23:10.969132 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 01:23:10.974584 systemd[1]: Stopped target paths.target - Path Units.
Mar 11 01:23:10.983629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 11 01:23:10.987460 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:23:10.993026 systemd[1]: Stopped target slices.target - Slice Units.
Mar 11 01:23:11.000590 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 11 01:23:11.011045 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 11 01:23:11.011107 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 01:23:11.020910 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 11 01:23:11.020963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 01:23:11.029331 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 11 01:23:11.029381 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 11 01:23:11.037879 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 11 01:23:11.037917 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 11 01:23:11.046180 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 11 01:23:11.235740 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: Data path switched from VF: enP10151s1 Mar 11 01:23:11.055832 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 11 01:23:11.065138 systemd-networkd[881]: eth0: DHCPv6 lease lost Mar 11 01:23:11.066768 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 11 01:23:11.068019 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 11 01:23:11.068112 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 11 01:23:11.075731 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 11 01:23:11.075822 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 11 01:23:11.086220 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 11 01:23:11.086285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 11 01:23:11.107631 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 11 01:23:11.115985 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 11 01:23:11.116071 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 01:23:11.126191 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 01:23:11.138732 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 11 01:23:11.139094 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 11 01:23:11.171703 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 11 01:23:11.171804 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 11 01:23:11.178813 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 11 01:23:11.178874 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 11 01:23:11.187808 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Mar 11 01:23:11.187849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 01:23:11.197324 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 11 01:23:11.197474 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 01:23:11.206244 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 11 01:23:11.206284 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 11 01:23:11.215690 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 11 01:23:11.215725 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 01:23:11.231384 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 11 01:23:11.231480 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 11 01:23:11.244767 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 11 01:23:11.244809 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 11 01:23:11.255736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 01:23:11.255785 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 01:23:11.283731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 11 01:23:11.296507 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 11 01:23:11.296574 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 01:23:11.317149 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 11 01:23:11.317208 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 11 01:23:11.323076 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 11 01:23:11.323116 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 11 01:23:11.333715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 01:23:11.333755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:11.343010 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 11 01:23:11.343115 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 11 01:23:11.351878 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 11 01:23:11.351967 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 11 01:23:11.561152 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 11 01:23:11.561308 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 11 01:23:11.569058 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 11 01:23:11.580838 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 11 01:23:11.580903 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 11 01:23:11.595714 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 11 01:23:11.606950 systemd[1]: Switching root. 
Mar 11 01:23:11.697426 systemd-journald[217]: Journal stopped 
Mar 11 01:23:01.161423 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 11 01:23:01.161430 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 11 01:23:01.161438 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 11 01:23:01.161445 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 11 01:23:01.161452 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 11 01:23:01.161460 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 11 01:23:01.161467 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 11 01:23:01.161473 kernel: psci: probing for conduit method from ACPI. Mar 11 01:23:01.161480 kernel: psci: PSCIv1.1 detected in firmware. Mar 11 01:23:01.161487 kernel: psci: Using standard PSCI v0.2 function IDs Mar 11 01:23:01.161494 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 11 01:23:01.161500 kernel: psci: SMC Calling Convention v1.4 Mar 11 01:23:01.161507 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 11 01:23:01.161514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 11 01:23:01.161523 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 11 01:23:01.161529 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 11 01:23:01.161537 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 11 01:23:01.161543 kernel: Detected PIPT I-cache on CPU0 Mar 11 01:23:01.161550 kernel: CPU features: detected: GIC system register CPU interface Mar 11 01:23:01.161557 kernel: CPU features: detected: Hardware dirty bit management Mar 11 01:23:01.161564 kernel: CPU features: detected: Spectre-BHB Mar 11 01:23:01.161571 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 11 01:23:01.161577 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 11 01:23:01.161584 kernel: CPU features: detected: ARM erratum 1418040 Mar 11 01:23:01.161591 kernel: CPU features: detected: ARM erratum 
1542419 (kernel portion) Mar 11 01:23:01.161600 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 11 01:23:01.161607 kernel: alternatives: applying boot alternatives Mar 11 01:23:01.161615 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c Mar 11 01:23:01.161622 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 11 01:23:01.161629 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 11 01:23:01.161636 kernel: Fallback order for Node 0: 0 Mar 11 01:23:01.161643 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 11 01:23:01.161650 kernel: Policy zone: Normal Mar 11 01:23:01.161656 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 11 01:23:01.161663 kernel: software IO TLB: area num 2. Mar 11 01:23:01.161670 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 11 01:23:01.161678 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 11 01:23:01.161686 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 11 01:23:01.161692 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 11 01:23:01.161700 kernel: rcu: RCU event tracing is enabled. Mar 11 01:23:01.161707 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 11 01:23:01.161714 kernel: Trampoline variant of Tasks RCU enabled. Mar 11 01:23:01.161721 kernel: Tracing variant of Tasks RCU enabled. 
Mar 11 01:23:01.161728 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 11 01:23:01.161735 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 11 01:23:01.161741 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 11 01:23:01.161748 kernel: GICv3: 960 SPIs implemented Mar 11 01:23:01.161757 kernel: GICv3: 0 Extended SPIs implemented Mar 11 01:23:01.161763 kernel: Root IRQ handler: gic_handle_irq Mar 11 01:23:01.161770 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 11 01:23:01.161777 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 11 01:23:01.161784 kernel: ITS: No ITS available, not enabling LPIs Mar 11 01:23:01.161791 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 11 01:23:01.161798 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 11 01:23:01.161805 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 11 01:23:01.161812 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 11 01:23:01.161819 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 11 01:23:01.161826 kernel: Console: colour dummy device 80x25 Mar 11 01:23:01.161835 kernel: printk: console [tty1] enabled Mar 11 01:23:01.161842 kernel: ACPI: Core revision 20230628 Mar 11 01:23:01.161849 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 11 01:23:01.161856 kernel: pid_max: default: 32768 minimum: 301 Mar 11 01:23:01.161863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 11 01:23:01.161870 kernel: landlock: Up and running. Mar 11 01:23:01.161877 kernel: SELinux: Initializing. 
Mar 11 01:23:01.161884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 11 01:23:01.161891 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 11 01:23:01.161900 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 11 01:23:01.161907 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 11 01:23:01.161915 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 11 01:23:01.161921 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 11 01:23:01.161928 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 11 01:23:01.161935 kernel: rcu: Hierarchical SRCU implementation. Mar 11 01:23:01.161942 kernel: rcu: Max phase no-delay instances is 400. Mar 11 01:23:01.161949 kernel: Remapping and enabling EFI services. Mar 11 01:23:01.161964 kernel: smp: Bringing up secondary CPUs ... Mar 11 01:23:01.161971 kernel: Detected PIPT I-cache on CPU1 Mar 11 01:23:01.161978 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 11 01:23:01.161985 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 11 01:23:01.161994 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 11 01:23:01.162002 kernel: smp: Brought up 1 node, 2 CPUs Mar 11 01:23:01.162009 kernel: SMP: Total of 2 processors activated. 
Mar 11 01:23:01.162017 kernel: CPU features: detected: 32-bit EL0 Support Mar 11 01:23:01.162024 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 11 01:23:01.162034 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 11 01:23:01.162042 kernel: CPU features: detected: CRC32 instructions Mar 11 01:23:01.162049 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 11 01:23:01.162057 kernel: CPU features: detected: LSE atomic instructions Mar 11 01:23:01.162064 kernel: CPU features: detected: Privileged Access Never Mar 11 01:23:01.162071 kernel: CPU: All CPU(s) started at EL1 Mar 11 01:23:01.162079 kernel: alternatives: applying system-wide alternatives Mar 11 01:23:01.162086 kernel: devtmpfs: initialized Mar 11 01:23:01.162094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 11 01:23:01.162103 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 11 01:23:01.162110 kernel: pinctrl core: initialized pinctrl subsystem Mar 11 01:23:01.162118 kernel: SMBIOS 3.1.0 present. 
Mar 11 01:23:01.162125 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 11 01:23:01.162133 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 11 01:23:01.162140 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 11 01:23:01.162148 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 11 01:23:01.162155 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 11 01:23:01.162162 kernel: audit: initializing netlink subsys (disabled) Mar 11 01:23:01.162172 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 11 01:23:01.162179 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 11 01:23:01.162186 kernel: cpuidle: using governor menu Mar 11 01:23:01.162194 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 11 01:23:01.162201 kernel: ASID allocator initialised with 32768 entries Mar 11 01:23:01.162209 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 11 01:23:01.162216 kernel: Serial: AMBA PL011 UART driver Mar 11 01:23:01.162223 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 11 01:23:01.162231 kernel: Modules: 0 pages in range for non-PLT usage Mar 11 01:23:01.162240 kernel: Modules: 509008 pages in range for PLT usage Mar 11 01:23:01.162247 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 11 01:23:01.162255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 11 01:23:01.162262 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 11 01:23:01.162270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 11 01:23:01.162277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 11 01:23:01.162284 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 11 01:23:01.162292 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Mar 11 01:23:01.162299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 11 01:23:01.162308 kernel: ACPI: Added _OSI(Module Device) Mar 11 01:23:01.164373 kernel: ACPI: Added _OSI(Processor Device) Mar 11 01:23:01.164388 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 11 01:23:01.164396 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 11 01:23:01.164403 kernel: ACPI: Interpreter enabled Mar 11 01:23:01.164411 kernel: ACPI: Using GIC for interrupt routing Mar 11 01:23:01.164419 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 11 01:23:01.164426 kernel: printk: console [ttyAMA0] enabled Mar 11 01:23:01.164433 kernel: printk: bootconsole [pl11] disabled Mar 11 01:23:01.164445 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 11 01:23:01.164453 kernel: iommu: Default domain type: Translated Mar 11 01:23:01.164460 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 11 01:23:01.164467 kernel: efivars: Registered efivars operations Mar 11 01:23:01.164475 kernel: vgaarb: loaded Mar 11 01:23:01.164482 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 11 01:23:01.164489 kernel: VFS: Disk quotas dquot_6.6.0 Mar 11 01:23:01.164497 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 11 01:23:01.164504 kernel: pnp: PnP ACPI init Mar 11 01:23:01.164513 kernel: pnp: PnP ACPI: found 0 devices Mar 11 01:23:01.164521 kernel: NET: Registered PF_INET protocol family Mar 11 01:23:01.164528 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 11 01:23:01.164536 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 11 01:23:01.164544 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 11 01:23:01.164551 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Mar 11 01:23:01.164559 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 11 01:23:01.164566 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 11 01:23:01.164574 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 11 01:23:01.164583 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 11 01:23:01.164590 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 11 01:23:01.164598 kernel: PCI: CLS 0 bytes, default 64 Mar 11 01:23:01.164605 kernel: kvm [1]: HYP mode not available Mar 11 01:23:01.164612 kernel: Initialise system trusted keyrings Mar 11 01:23:01.164620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 11 01:23:01.164627 kernel: Key type asymmetric registered Mar 11 01:23:01.164634 kernel: Asymmetric key parser 'x509' registered Mar 11 01:23:01.164642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 11 01:23:01.164651 kernel: io scheduler mq-deadline registered Mar 11 01:23:01.164658 kernel: io scheduler kyber registered Mar 11 01:23:01.164666 kernel: io scheduler bfq registered Mar 11 01:23:01.164673 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 11 01:23:01.164680 kernel: thunder_xcv, ver 1.0 Mar 11 01:23:01.164688 kernel: thunder_bgx, ver 1.0 Mar 11 01:23:01.164695 kernel: nicpf, ver 1.0 Mar 11 01:23:01.164702 kernel: nicvf, ver 1.0 Mar 11 01:23:01.164837 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 11 01:23:01.164913 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-11T01:23:00 UTC (1773192180) Mar 11 01:23:01.164924 kernel: efifb: probing for efifb Mar 11 01:23:01.164931 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 11 01:23:01.164939 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 11 01:23:01.164946 kernel: efifb: scrolling: redraw Mar 11 01:23:01.164953 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Mar 11 01:23:01.164961 kernel: Console: switching to colour frame buffer device 128x48 Mar 11 01:23:01.164968 kernel: fb0: EFI VGA frame buffer device Mar 11 01:23:01.164977 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 11 01:23:01.164985 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 11 01:23:01.164992 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 11 01:23:01.165000 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 11 01:23:01.165007 kernel: watchdog: Hard watchdog permanently disabled Mar 11 01:23:01.165014 kernel: NET: Registered PF_INET6 protocol family Mar 11 01:23:01.165022 kernel: Segment Routing with IPv6 Mar 11 01:23:01.165029 kernel: In-situ OAM (IOAM) with IPv6 Mar 11 01:23:01.165036 kernel: NET: Registered PF_PACKET protocol family Mar 11 01:23:01.165045 kernel: Key type dns_resolver registered Mar 11 01:23:01.165053 kernel: registered taskstats version 1 Mar 11 01:23:01.165060 kernel: Loading compiled-in X.509 certificates Mar 11 01:23:01.165068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e2d32b7c633536fa6eb6e76ba97909ae7ad11d09' Mar 11 01:23:01.165075 kernel: Key type .fscrypt registered Mar 11 01:23:01.165082 kernel: Key type fscrypt-provisioning registered Mar 11 01:23:01.165089 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 11 01:23:01.165097 kernel: ima: Allocated hash algorithm: sha1 Mar 11 01:23:01.165105 kernel: ima: No architecture policies found Mar 11 01:23:01.165113 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 11 01:23:01.165121 kernel: clk: Disabling unused clocks Mar 11 01:23:01.165128 kernel: Freeing unused kernel memory: 39424K Mar 11 01:23:01.165136 kernel: Run /init as init process Mar 11 01:23:01.165143 kernel: with arguments: Mar 11 01:23:01.165150 kernel: /init Mar 11 01:23:01.165157 kernel: with environment: Mar 11 01:23:01.165164 kernel: HOME=/ Mar 11 01:23:01.165171 kernel: TERM=linux Mar 11 01:23:01.165181 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 11 01:23:01.165192 systemd[1]: Detected virtualization microsoft. Mar 11 01:23:01.165200 systemd[1]: Detected architecture arm64. Mar 11 01:23:01.165207 systemd[1]: Running in initrd. Mar 11 01:23:01.165215 systemd[1]: No hostname configured, using default hostname. Mar 11 01:23:01.165222 systemd[1]: Hostname set to . Mar 11 01:23:01.165231 systemd[1]: Initializing machine ID from random generator. Mar 11 01:23:01.165240 systemd[1]: Queued start job for default target initrd.target. Mar 11 01:23:01.165248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 01:23:01.165256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 01:23:01.165265 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 11 01:23:01.165273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 11 01:23:01.165281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 11 01:23:01.165289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 11 01:23:01.165299 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 11 01:23:01.165309 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 11 01:23:01.165382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 01:23:01.165392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 11 01:23:01.165400 systemd[1]: Reached target paths.target - Path Units. Mar 11 01:23:01.165408 systemd[1]: Reached target slices.target - Slice Units. Mar 11 01:23:01.165416 systemd[1]: Reached target swap.target - Swaps. Mar 11 01:23:01.165424 systemd[1]: Reached target timers.target - Timer Units. Mar 11 01:23:01.165432 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 11 01:23:01.165442 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 11 01:23:01.165451 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 11 01:23:01.165458 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 11 01:23:01.165466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 11 01:23:01.165474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 11 01:23:01.165482 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 01:23:01.165490 systemd[1]: Reached target sockets.target - Socket Units. Mar 11 01:23:01.165498 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 11 01:23:01.165508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 11 01:23:01.165516 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 11 01:23:01.165524 systemd[1]: Starting systemd-fsck-usr.service... Mar 11 01:23:01.165532 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 11 01:23:01.165558 systemd-journald[217]: Collecting audit messages is disabled. Mar 11 01:23:01.165580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 11 01:23:01.165588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:23:01.165596 systemd-journald[217]: Journal started Mar 11 01:23:01.165615 systemd-journald[217]: Runtime Journal (/run/log/journal/d43bf85347104897bff76f46285c5a94) is 8.0M, max 78.5M, 70.5M free. Mar 11 01:23:01.170482 systemd-modules-load[218]: Inserted module 'overlay' Mar 11 01:23:01.199371 systemd[1]: Started systemd-journald.service - Journal Service. Mar 11 01:23:01.199418 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 11 01:23:01.206383 kernel: Bridge firewalling registered Mar 11 01:23:01.204180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 11 01:23:01.206589 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 11 01:23:01.213335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 01:23:01.221914 systemd[1]: Finished systemd-fsck-usr.service. Mar 11 01:23:01.229026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 01:23:01.239275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:01.258676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 11 01:23:01.271480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 01:23:01.283472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 11 01:23:01.299266 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 11 01:23:01.312881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 01:23:01.323827 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 01:23:01.333037 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 11 01:23:01.344758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 01:23:01.360615 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 11 01:23:01.367468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 01:23:01.384296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 11 01:23:01.398603 dracut-cmdline[250]: dracut-dracut-053 Mar 11 01:23:01.404493 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 01:23:01.416893 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c Mar 11 01:23:01.414853 systemd-resolved[252]: Positive Trust Anchors:
Mar 11 01:23:01.414863 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 01:23:01.414895 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 01:23:01.417296 systemd-resolved[252]: Defaulting to hostname 'linux'. Mar 11 01:23:01.418097 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 01:23:01.445833 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 01:23:01.559341 kernel: SCSI subsystem initialized Mar 11 01:23:01.566327 kernel: Loading iSCSI transport class v2.0-870. Mar 11 01:23:01.576336 kernel: iscsi: registered transport (tcp) Mar 11 01:23:01.592412 kernel: iscsi: registered transport (qla4xxx) Mar 11 01:23:01.592470 kernel: QLogic iSCSI HBA Driver Mar 11 01:23:01.625884 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 11 01:23:01.636550 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 11 01:23:01.664233 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 11 01:23:01.664282 kernel: device-mapper: uevent: version 1.0.3 Mar 11 01:23:01.669327 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 11 01:23:01.715331 kernel: raid6: neonx8 gen() 15821 MB/s Mar 11 01:23:01.734321 kernel: raid6: neonx4 gen() 15700 MB/s Mar 11 01:23:01.753321 kernel: raid6: neonx2 gen() 13230 MB/s Mar 11 01:23:01.773321 kernel: raid6: neonx1 gen() 10483 MB/s Mar 11 01:23:01.792325 kernel: raid6: int64x8 gen() 6979 MB/s Mar 11 01:23:01.811324 kernel: raid6: int64x4 gen() 7372 MB/s Mar 11 01:23:01.831321 kernel: raid6: int64x2 gen() 6146 MB/s Mar 11 01:23:01.853395 kernel: raid6: int64x1 gen() 5068 MB/s Mar 11 01:23:01.853414 kernel: raid6: using algorithm neonx8 gen() 15821 MB/s Mar 11 01:23:01.875964 kernel: raid6: .... xor() 11989 MB/s, rmw enabled Mar 11 01:23:01.875983 kernel: raid6: using neon recovery algorithm Mar 11 01:23:01.886375 kernel: xor: measuring software checksum speed Mar 11 01:23:01.886388 kernel: 8regs : 19778 MB/sec Mar 11 01:23:01.889070 kernel: 32regs : 19627 MB/sec Mar 11 01:23:01.895194 kernel: arm64_neon : 26238 MB/sec Mar 11 01:23:01.895204 kernel: xor: using function: arm64_neon (26238 MB/sec) Mar 11 01:23:01.945369 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 11 01:23:01.954443 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 11 01:23:01.968448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 01:23:01.987375 systemd-udevd[438]: Using default interface naming scheme 'v255'. Mar 11 01:23:01.991851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 01:23:02.006496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 11 01:23:02.026578 dracut-pre-trigger[453]: rd.md=0: removing MD RAID activation Mar 11 01:23:02.054552 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 11 01:23:02.073538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 01:23:02.109882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 01:23:02.127457 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 11 01:23:02.139536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 11 01:23:02.154413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 11 01:23:02.166636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 01:23:02.176711 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 01:23:02.194538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 11 01:23:02.212834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 11 01:23:02.221604 kernel: hv_vmbus: Vmbus version:5.3 Mar 11 01:23:02.227895 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 01:23:02.232176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 01:23:02.250004 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:23:02.260830 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 11 01:23:02.260851 kernel: hv_vmbus: registering driver hid_hyperv Mar 11 01:23:02.260861 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 11 01:23:02.265869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 01:23:02.284516 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 11 01:23:02.284536 kernel: hv_vmbus: registering driver hv_storvsc Mar 11 01:23:02.284549 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 11 01:23:02.269998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:02.293946 kernel: scsi host1: storvsc_host_t Mar 11 01:23:02.296150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:23:02.344491 kernel: scsi host0: storvsc_host_t Mar 11 01:23:02.344664 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 11 01:23:02.344676 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 11 01:23:02.344779 kernel: PTP clock support registered Mar 11 01:23:02.344790 kernel: hv_vmbus: registering driver hv_netvsc Mar 11 01:23:02.344799 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 11 01:23:02.344894 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 11 01:23:02.337652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:23:02.365127 kernel: hv_utils: Registering HyperV Utility Driver Mar 11 01:23:02.365148 kernel: hv_vmbus: registering driver hv_utils Mar 11 01:23:02.357544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 01:23:02.377043 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 11 01:23:02.377216 kernel: hv_utils: Heartbeat IC version 3.0 Mar 11 01:23:02.357630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:02.768890 kernel: hv_utils: Shutdown IC version 3.2 Mar 11 01:23:02.768911 kernel: hv_utils: TimeSync IC version 4.0 Mar 11 01:23:02.768921 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 11 01:23:02.768930 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 11 01:23:02.744466 systemd-resolved[252]: Clock change detected. Flushing caches. Mar 11 01:23:02.790960 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 11 01:23:02.793467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#307 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 11 01:23:02.793628 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 11 01:23:02.793724 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 11 01:23:02.793809 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 11 01:23:02.793891 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 11 01:23:02.763599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:23:02.812603 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: VF slot 1 added Mar 11 01:23:02.812744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 11 01:23:02.818397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:23:02.845514 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 11 01:23:02.845715 kernel: hv_vmbus: registering driver hv_pci Mar 11 01:23:02.845726 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 11 01:23:02.845848 kernel: hv_pci c40579f7-27a7-4a74-bded-df18ecf0c4c8: PCI VMBus probing: Using version 0x10004 Mar 11 01:23:02.848581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 11 01:23:02.871759 kernel: hv_pci c40579f7-27a7-4a74-bded-df18ecf0c4c8: PCI host bridge to bus 27a7:00 Mar 11 01:23:02.872039 kernel: pci_bus 27a7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 11 01:23:02.872252 kernel: pci_bus 27a7:00: No busn resource found for root bus, will use [bus 00-ff] Mar 11 01:23:02.872390 kernel: pci 27a7:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 11 01:23:02.872476 kernel: pci 27a7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 11 01:23:02.872498 kernel: pci 27a7:00:02.0: enabling Extended Tags Mar 11 01:23:02.872513 kernel: pci 27a7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 27a7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 11 01:23:02.906085 kernel: pci_bus 27a7:00: busn_res: [bus 00-ff] end is updated to 00 Mar 11 01:23:02.906262 kernel: pci 27a7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 11 01:23:02.923463 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 01:23:02.951998 kernel: mlx5_core 27a7:00:02.0: enabling device (0000 -> 0002) Mar 11 01:23:02.957442 kernel: mlx5_core 27a7:00:02.0: firmware version: 16.30.5026 Mar 11 01:23:03.153909 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: VF registering: eth1 Mar 11 01:23:03.154099 kernel: mlx5_core 27a7:00:02.0 eth1: joined to eth0 Mar 11 01:23:03.160545 kernel: mlx5_core 27a7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 11 01:23:03.170455 kernel: mlx5_core 27a7:00:02.0 enP10151s1: renamed from eth1 Mar 11 01:23:03.363463 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 11 01:23:03.374009 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Mar 11 01:23:03.425180 kernel: BTRFS: device fsid 6268782d-ce1a-4049-a9c9-846620fa6ee9 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (497) Mar 11 01:23:03.437818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 11 01:23:03.443112 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 11 01:23:03.466502 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487) Mar 11 01:23:03.474609 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 11 01:23:03.491695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 11 01:23:03.506453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 11 01:23:03.515445 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 11 01:23:04.525511 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 11 01:23:04.525674 disk-uuid[611]: The operation has completed successfully. Mar 11 01:23:04.587053 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 11 01:23:04.587147 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 11 01:23:04.625613 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 11 01:23:04.635520 sh[697]: Success Mar 11 01:23:04.664458 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 11 01:23:04.885301 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 11 01:23:04.890620 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 11 01:23:04.903552 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Mar 11 01:23:04.930469 kernel: BTRFS info (device dm-0): first mount of filesystem 6268782d-ce1a-4049-a9c9-846620fa6ee9 Mar 11 01:23:04.930528 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:23:04.935594 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 11 01:23:04.939666 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 11 01:23:04.942863 kernel: BTRFS info (device dm-0): using free space tree Mar 11 01:23:05.235830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 11 01:23:05.240251 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 11 01:23:05.256697 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 11 01:23:05.261572 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 11 01:23:05.295525 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:23:05.295565 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:23:05.300186 kernel: BTRFS info (device sda6): using free space tree Mar 11 01:23:05.336389 kernel: BTRFS info (device sda6): auto enabling async discard Mar 11 01:23:05.345108 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 11 01:23:05.354340 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:23:05.359166 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 11 01:23:05.373615 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 11 01:23:05.378531 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 01:23:05.389614 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 11 01:23:05.426924 systemd-networkd[881]: lo: Link UP Mar 11 01:23:05.426935 systemd-networkd[881]: lo: Gained carrier Mar 11 01:23:05.428813 systemd-networkd[881]: Enumeration completed Mar 11 01:23:05.429523 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 01:23:05.434311 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:23:05.434315 systemd-networkd[881]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 01:23:05.435245 systemd[1]: Reached target network.target - Network. Mar 11 01:23:05.514452 kernel: mlx5_core 27a7:00:02.0 enP10151s1: Link up Mar 11 01:23:05.554448 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: Data path switched to VF: enP10151s1 Mar 11 01:23:05.554909 systemd-networkd[881]: enP10151s1: Link UP Mar 11 01:23:05.555028 systemd-networkd[881]: eth0: Link UP Mar 11 01:23:05.555128 systemd-networkd[881]: eth0: Gained carrier Mar 11 01:23:05.555137 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:23:05.572911 systemd-networkd[881]: enP10151s1: Gained carrier Mar 11 01:23:05.582484 systemd-networkd[881]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 11 01:23:06.131624 ignition[876]: Ignition 2.19.0 Mar 11 01:23:06.131635 ignition[876]: Stage: fetch-offline Mar 11 01:23:06.133089 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 11 01:23:06.131670 ignition[876]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:06.131678 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:23:06.131775 ignition[876]: parsed url from cmdline: "" Mar 11 01:23:06.131778 ignition[876]: no config URL provided Mar 11 01:23:06.131783 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 01:23:06.156667 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 11 01:23:06.131790 ignition[876]: no config at "/usr/lib/ignition/user.ign" Mar 11 01:23:06.131795 ignition[876]: failed to fetch config: resource requires networking Mar 11 01:23:06.131957 ignition[876]: Ignition finished successfully
Mar 11 01:23:06.176953 ignition[891]: Ignition 2.19.0 Mar 11 01:23:06.176960 ignition[891]: Stage: fetch Mar 11 01:23:06.177161 ignition[891]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:06.177173 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:23:06.177273 ignition[891]: parsed url from cmdline: "" Mar 11 01:23:06.177277 ignition[891]: no config URL provided Mar 11 01:23:06.177281 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 01:23:06.177288 ignition[891]: no config at "/usr/lib/ignition/user.ign" Mar 11 01:23:06.177313 ignition[891]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 11 01:23:06.286465 ignition[891]: GET result: OK Mar 11 01:23:06.287359 ignition[891]: config has been read from IMDS userdata Mar 11 01:23:06.287403 ignition[891]: parsing config with SHA512: 88857634428ae7527be7492a81b254df9662704dadc90e3795067001991601853d715aaf0e8b2a576fc1573060550cbed4299309add146d75b91e85524b8cfba Mar 11 01:23:06.291413 unknown[891]: fetched base config from "system" Mar 11 01:23:06.291901 ignition[891]: fetch: fetch complete Mar 11 01:23:06.291420 unknown[891]: fetched base config from "system" Mar 11 01:23:06.291906 ignition[891]: fetch: fetch passed Mar 11 01:23:06.291426 unknown[891]: fetched user config from "azure" Mar 11 01:23:06.291958 ignition[891]: Ignition finished successfully Mar 11 01:23:06.294275 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 11 01:23:06.313683 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 11 01:23:06.329702 ignition[897]: Ignition 2.19.0 Mar 11 01:23:06.329710 ignition[897]: Stage: kargs Mar 11 01:23:06.333553 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 11 01:23:06.329886 ignition[897]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:06.329902 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:23:06.330872 ignition[897]: kargs: kargs passed Mar 11 01:23:06.347621 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 11 01:23:06.330914 ignition[897]: Ignition finished successfully Mar 11 01:23:06.368848 ignition[903]: Ignition 2.19.0 Mar 11 01:23:06.368861 ignition[903]: Stage: disks Mar 11 01:23:06.369020 ignition[903]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:06.374392 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 11 01:23:06.369030 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:23:06.378767 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 11 01:23:06.369943 ignition[903]: disks: disks passed Mar 11 01:23:06.386601 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 11 01:23:06.369989 ignition[903]: Ignition finished successfully Mar 11 01:23:06.395252 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 01:23:06.403589 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 01:23:06.412198 systemd[1]: Reached target basic.target - Basic System.
Mar 11 01:23:06.430632 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 11 01:23:06.503086 systemd-fsck[912]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 11 01:23:06.511907 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 11 01:23:06.528599 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 11 01:23:06.583457 kernel: EXT4-fs (sda9): mounted filesystem 19488164-8e25-4d6a-86d9-f70a8ed432cb r/w with ordered data mode. Quota mode: none. Mar 11 01:23:06.583668 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 11 01:23:06.587637 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 11 01:23:06.626495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 01:23:06.644441 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (923) Mar 11 01:23:06.656204 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:23:06.656253 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:23:06.659797 kernel: BTRFS info (device sda6): using free space tree Mar 11 01:23:06.662536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 11 01:23:06.673820 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 11 01:23:06.685845 kernel: BTRFS info (device sda6): auto enabling async discard Mar 11 01:23:06.680130 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 11 01:23:06.680164 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 01:23:06.696892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 11 01:23:06.703315 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 11 01:23:06.714622 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 11 01:23:07.234542 systemd-networkd[881]: eth0: Gained IPv6LL Mar 11 01:23:07.253359 coreos-metadata[938]: Mar 11 01:23:07.253 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 11 01:23:07.261254 coreos-metadata[938]: Mar 11 01:23:07.261 INFO Fetch successful Mar 11 01:23:07.265242 coreos-metadata[938]: Mar 11 01:23:07.265 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 11 01:23:07.283233 coreos-metadata[938]: Mar 11 01:23:07.283 INFO Fetch successful Mar 11 01:23:07.297486 coreos-metadata[938]: Mar 11 01:23:07.297 INFO wrote hostname ci-4081.3.6-n-541af3988c to /sysroot/etc/hostname Mar 11 01:23:07.305559 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 11 01:23:07.533734 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory Mar 11 01:23:07.555077 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory Mar 11 01:23:07.575796 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory Mar 11 01:23:07.582657 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory Mar 11 01:23:08.564706 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 11 01:23:08.580585 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 11 01:23:08.586631 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 11 01:23:08.605110 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 11 01:23:08.609001 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:23:08.628651 ignition[1042]: INFO : Ignition 2.19.0 Mar 11 01:23:08.632571 ignition[1042]: INFO : Stage: mount Mar 11 01:23:08.632571 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:08.632571 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:23:08.645623 ignition[1042]: INFO : mount: mount passed Mar 11 01:23:08.645623 ignition[1042]: INFO : Ignition finished successfully Mar 11 01:23:08.645669 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 11 01:23:08.663611 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 11 01:23:08.673111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 11 01:23:08.682127 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 01:23:08.708457 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1053) Mar 11 01:23:08.708514 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:23:08.713022 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:23:08.716280 kernel: BTRFS info (device sda6): using free space tree Mar 11 01:23:08.722438 kernel: BTRFS info (device sda6): auto enabling async discard Mar 11 01:23:08.724156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 11 01:23:08.747630 ignition[1070]: INFO : Ignition 2.19.0 Mar 11 01:23:08.747630 ignition[1070]: INFO : Stage: files Mar 11 01:23:08.753799 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:23:08.753799 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:23:08.753799 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Mar 11 01:23:08.753799 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 11 01:23:08.753799 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 11 01:23:08.849881 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 11 01:23:08.856006 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 11 01:23:08.856006 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 11 01:23:08.850283 unknown[1070]: wrote ssh authorized keys file for user: core Mar 11 01:23:08.871876 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 11 01:23:08.871876 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 11 01:23:08.905243 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 11 01:23:09.086118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 11 01:23:09.094118 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Mar 11 01:23:09.491126 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 11 01:23:10.385656 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 11 01:23:10.385656 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 11 01:23:10.402335 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 11 01:23:10.411675 ignition[1070]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 11 01:23:10.411675 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 11 01:23:10.411675 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 11 01:23:10.411675 ignition[1070]: INFO : files: files passed Mar 11 01:23:10.411675 ignition[1070]: INFO : Ignition finished successfully Mar 11 01:23:10.410676 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 11 01:23:10.442675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 11 01:23:10.456601 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 11 01:23:10.467055 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 11 01:23:10.467206 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 11 01:23:10.507692 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:23:10.507692 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:23:10.526728 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:23:10.510080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 01:23:10.520362 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 11 01:23:10.548674 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 11 01:23:10.584311 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 11 01:23:10.584454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 11 01:23:10.593631 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 11 01:23:10.602839 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 11 01:23:10.611013 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 11 01:23:10.625611 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 11 01:23:10.643949 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 01:23:10.655686 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 11 01:23:10.669871 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 11 01:23:10.674807 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 11 01:23:10.684562 systemd[1]: Stopped target timers.target - Timer Units.
Mar 11 01:23:10.693030 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 11 01:23:10.693153 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 01:23:10.705224 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 11 01:23:10.709764 systemd[1]: Stopped target basic.target - Basic System.
Mar 11 01:23:10.718309 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 11 01:23:10.726837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 01:23:10.735400 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 11 01:23:10.744951 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 11 01:23:10.753966 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 01:23:10.763615 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 11 01:23:10.772388 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 11 01:23:10.781668 systemd[1]: Stopped target swap.target - Swaps.
Mar 11 01:23:10.789710 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 11 01:23:10.789831 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 01:23:10.801740 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:23:10.806389 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:23:10.815237 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 11 01:23:10.815307 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:23:10.824858 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 11 01:23:10.824972 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 11 01:23:10.838492 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 11 01:23:10.838613 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 01:23:10.844432 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 11 01:23:10.844524 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 11 01:23:10.898286 ignition[1122]: INFO : Ignition 2.19.0
Mar 11 01:23:10.898286 ignition[1122]: INFO : Stage: umount
Mar 11 01:23:10.852626 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 11 01:23:10.924392 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 01:23:10.924392 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:23:10.924392 ignition[1122]: INFO : umount: umount passed
Mar 11 01:23:10.924392 ignition[1122]: INFO : Ignition finished successfully
Mar 11 01:23:10.852715 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 11 01:23:10.876665 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 11 01:23:10.898631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 11 01:23:10.903194 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 11 01:23:10.903369 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:23:10.909634 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 11 01:23:10.909735 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 01:23:10.921289 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 11 01:23:10.921393 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 11 01:23:10.930006 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 11 01:23:10.930254 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 11 01:23:10.945722 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 11 01:23:10.945790 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 11 01:23:10.953082 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 11 01:23:10.953123 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 11 01:23:10.957597 systemd[1]: Stopped target network.target - Network.
Mar 11 01:23:10.969067 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 11 01:23:10.969132 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 01:23:10.974584 systemd[1]: Stopped target paths.target - Path Units.
Mar 11 01:23:10.983629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 11 01:23:10.987460 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:23:10.993026 systemd[1]: Stopped target slices.target - Slice Units.
Mar 11 01:23:11.000590 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 11 01:23:11.011045 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 11 01:23:11.011107 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 01:23:11.020910 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 11 01:23:11.020963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 01:23:11.029331 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 11 01:23:11.029381 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 11 01:23:11.037879 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 11 01:23:11.037917 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 11 01:23:11.046180 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 11 01:23:11.235740 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: Data path switched from VF: enP10151s1
Mar 11 01:23:11.055832 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 11 01:23:11.065138 systemd-networkd[881]: eth0: DHCPv6 lease lost
Mar 11 01:23:11.066768 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 11 01:23:11.068019 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 11 01:23:11.068112 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 11 01:23:11.075731 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 11 01:23:11.075822 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 11 01:23:11.086220 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 11 01:23:11.086285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:23:11.107631 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 11 01:23:11.115985 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 11 01:23:11.116071 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 01:23:11.126191 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:23:11.138732 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 11 01:23:11.139094 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 11 01:23:11.171703 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 11 01:23:11.171804 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:23:11.178813 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 11 01:23:11.178874 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:23:11.187808 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 11 01:23:11.187849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:23:11.197324 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 11 01:23:11.197474 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:23:11.206244 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 11 01:23:11.206284 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:23:11.215690 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 11 01:23:11.215725 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:23:11.231384 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 11 01:23:11.231480 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 01:23:11.244767 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 11 01:23:11.244809 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 11 01:23:11.255736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 01:23:11.255785 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:23:11.283731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 11 01:23:11.296507 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 11 01:23:11.296574 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:23:11.317149 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 11 01:23:11.317208 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:23:11.323076 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 11 01:23:11.323116 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:23:11.333715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:23:11.333755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:11.343010 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 11 01:23:11.343115 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 11 01:23:11.351878 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 11 01:23:11.351967 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 11 01:23:11.561152 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 11 01:23:11.561308 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 11 01:23:11.569058 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 11 01:23:11.580838 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 11 01:23:11.580903 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 11 01:23:11.595714 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 11 01:23:11.606950 systemd[1]: Switching root.
Mar 11 01:23:11.697426 systemd-journald[217]: Journal stopped
Mar 11 01:23:15.810643 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 11 01:23:15.810668 kernel: SELinux: policy capability network_peer_controls=1
Mar 11 01:23:15.810679 kernel: SELinux: policy capability open_perms=1
Mar 11 01:23:15.810689 kernel: SELinux: policy capability extended_socket_class=1
Mar 11 01:23:15.810697 kernel: SELinux: policy capability always_check_network=0
Mar 11 01:23:15.810705 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 11 01:23:15.810714 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 11 01:23:15.810722 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 11 01:23:15.810730 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 11 01:23:15.810738 kernel: audit: type=1403 audit(1773192192.652:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 11 01:23:15.810749 systemd[1]: Successfully loaded SELinux policy in 157.665ms.
Mar 11 01:23:15.810758 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.267ms.
Mar 11 01:23:15.810768 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 01:23:15.810777 systemd[1]: Detected virtualization microsoft.
Mar 11 01:23:15.810787 systemd[1]: Detected architecture arm64.
Mar 11 01:23:15.810798 systemd[1]: Detected first boot.
Mar 11 01:23:15.810808 systemd[1]: Hostname set to .
Mar 11 01:23:15.810817 systemd[1]: Initializing machine ID from random generator.
Mar 11 01:23:15.810826 zram_generator::config[1164]: No configuration found.
Mar 11 01:23:15.810835 systemd[1]: Populated /etc with preset unit settings.
Mar 11 01:23:15.810844 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 11 01:23:15.810855 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 11 01:23:15.810865 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 11 01:23:15.810875 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 11 01:23:15.810884 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 11 01:23:15.810894 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 11 01:23:15.810903 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 11 01:23:15.810912 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 11 01:23:15.810924 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 11 01:23:15.810933 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 11 01:23:15.810942 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 11 01:23:15.810952 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:23:15.810961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:23:15.810970 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 11 01:23:15.810980 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 11 01:23:15.810989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 11 01:23:15.810998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 01:23:15.811009 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 11 01:23:15.811018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:23:15.811028 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 11 01:23:15.811039 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 11 01:23:15.811049 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 11 01:23:15.811058 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 11 01:23:15.811068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 01:23:15.811079 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 01:23:15.811089 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 01:23:15.811098 systemd[1]: Reached target swap.target - Swaps.
Mar 11 01:23:15.811108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 11 01:23:15.811117 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 11 01:23:15.811127 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:23:15.811136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:23:15.811148 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:23:15.811158 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 11 01:23:15.811167 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 11 01:23:15.811177 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 11 01:23:15.811186 systemd[1]: Mounting media.mount - External Media Directory...
Mar 11 01:23:15.811196 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 11 01:23:15.811207 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 11 01:23:15.811217 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 11 01:23:15.811227 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 11 01:23:15.811236 systemd[1]: Reached target machines.target - Containers.
Mar 11 01:23:15.811246 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 11 01:23:15.811256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 01:23:15.811265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 01:23:15.811276 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 11 01:23:15.811287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 01:23:15.811297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 01:23:15.811307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 01:23:15.811317 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 11 01:23:15.811326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 01:23:15.811336 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 11 01:23:15.811346 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 11 01:23:15.811355 kernel: fuse: init (API version 7.39)
Mar 11 01:23:15.811364 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 11 01:23:15.811375 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 11 01:23:15.811385 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 11 01:23:15.811394 kernel: loop: module loaded
Mar 11 01:23:15.811403 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 01:23:15.811412 kernel: ACPI: bus type drm_connector registered
Mar 11 01:23:15.811440 systemd-journald[1267]: Collecting audit messages is disabled.
Mar 11 01:23:15.811461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 01:23:15.811472 systemd-journald[1267]: Journal started
Mar 11 01:23:15.811491 systemd-journald[1267]: Runtime Journal (/run/log/journal/1864fdc551554b4a86669b7444787108) is 8.0M, max 78.5M, 70.5M free.
Mar 11 01:23:14.969121 systemd[1]: Queued start job for default target multi-user.target.
Mar 11 01:23:15.120383 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 11 01:23:15.120710 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 11 01:23:15.121011 systemd[1]: systemd-journald.service: Consumed 2.390s CPU time.
Mar 11 01:23:15.821116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 11 01:23:15.836469 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 11 01:23:15.848851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 01:23:15.858713 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 11 01:23:15.858772 systemd[1]: Stopped verity-setup.service.
Mar 11 01:23:15.875457 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 01:23:15.875315 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 11 01:23:15.879768 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 11 01:23:15.884462 systemd[1]: Mounted media.mount - External Media Directory.
Mar 11 01:23:15.888814 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 11 01:23:15.893492 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 11 01:23:15.898174 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 11 01:23:15.904531 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 11 01:23:15.909667 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:23:15.915041 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 11 01:23:15.915179 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 11 01:23:15.920506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 01:23:15.920642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 01:23:15.925730 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 01:23:15.925862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 01:23:15.930525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 01:23:15.930650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 01:23:15.936156 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 11 01:23:15.936283 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 11 01:23:15.941102 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 01:23:15.941229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 01:23:15.946009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:23:15.951373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 11 01:23:15.956888 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 11 01:23:15.962299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:23:15.975842 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 11 01:23:15.989514 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 11 01:23:15.995095 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 11 01:23:16.000230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 11 01:23:16.000266 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 01:23:16.005800 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 11 01:23:16.012233 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 11 01:23:16.017841 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 11 01:23:16.022054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 01:23:16.022981 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 11 01:23:16.028422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 11 01:23:16.033170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 01:23:16.034694 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 11 01:23:16.039362 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 01:23:16.040426 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 01:23:16.046592 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 11 01:23:16.062686 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 01:23:16.073410 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 11 01:23:16.082036 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 11 01:23:16.088787 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 11 01:23:16.095468 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 11 01:23:16.099039 systemd-journald[1267]: Time spent on flushing to /var/log/journal/1864fdc551554b4a86669b7444787108 is 28.998ms for 897 entries.
Mar 11 01:23:16.099039 systemd-journald[1267]: System Journal (/var/log/journal/1864fdc551554b4a86669b7444787108) is 8.0M, max 2.6G, 2.6G free.
Mar 11 01:23:16.177636 systemd-journald[1267]: Received client request to flush runtime journal.
Mar 11 01:23:16.177691 kernel: loop0: detected capacity change from 0 to 114328
Mar 11 01:23:16.110876 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 11 01:23:16.120281 udevadm[1302]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 11 01:23:16.121194 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 11 01:23:16.137624 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 11 01:23:16.162342 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Mar 11 01:23:16.162353 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Mar 11 01:23:16.167762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:23:16.181667 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 11 01:23:16.189726 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 11 01:23:16.195201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:23:16.215815 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 11 01:23:16.216409 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 11 01:23:16.301504 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 11 01:23:16.312613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 01:23:16.325624 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Mar 11 01:23:16.325638 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Mar 11 01:23:16.329242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:23:16.445508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 11 01:23:16.495452 kernel: loop1: detected capacity change from 0 to 31320
Mar 11 01:23:16.759059 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 11 01:23:16.768617 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:23:16.794086 systemd-udevd[1323]: Using default interface naming scheme 'v255'.
Mar 11 01:23:16.870452 kernel: loop2: detected capacity change from 0 to 114432
Mar 11 01:23:16.916989 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:23:16.932589 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 01:23:16.981615 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 11 01:23:16.987992 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 11 01:23:17.055451 kernel: mousedev: PS/2 mouse device common for all mice
Mar 11 01:23:17.056581 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 11 01:23:17.087255 kernel: hv_vmbus: registering driver hv_balloon
Mar 11 01:23:17.087395 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 11 01:23:17.093234 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 11 01:23:17.117416 kernel: hv_vmbus: registering driver hyperv_fb
Mar 11 01:23:17.117507 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 11 01:23:17.129863 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 11 01:23:17.130795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 11 01:23:17.131595 kernel: Console: switching to colour dummy device 80x25
Mar 11 01:23:17.143033 kernel: Console: switching to colour frame buffer device 128x48
Mar 11 01:23:17.156686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:23:17.171258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:23:17.172003 systemd-networkd[1335]: lo: Link UP
Mar 11 01:23:17.172012 systemd-networkd[1335]: lo: Gained carrier
Mar 11 01:23:17.173726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:17.174938 systemd-networkd[1335]: Enumeration completed
Mar 11 01:23:17.177546 systemd-networkd[1335]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:23:17.177550 systemd-networkd[1335]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 01:23:17.178842 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 01:23:17.190839 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 11 01:23:17.202016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:23:17.230449 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1333)
Mar 11 01:23:17.230531 kernel: mlx5_core 27a7:00:02.0 enP10151s1: Link up
Mar 11 01:23:17.256860 kernel: hv_netvsc 7ced8d89-b939-7ced-8d89-b9397ced8d89 eth0: Data path switched to VF: enP10151s1
Mar 11 01:23:17.259927 systemd-networkd[1335]: enP10151s1: Link UP
Mar 11 01:23:17.260793 systemd-networkd[1335]: eth0: Link UP
Mar 11 01:23:17.260872 systemd-networkd[1335]: eth0: Gained carrier
Mar 11 01:23:17.260938 systemd-networkd[1335]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:23:17.265461 kernel: loop3: detected capacity change from 0 to 197488
Mar 11 01:23:17.273539 systemd-networkd[1335]: enP10151s1: Gained carrier
Mar 11 01:23:17.279477 systemd-networkd[1335]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 11 01:23:17.286725 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 11 01:23:17.297450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 11 01:23:17.313450 kernel: loop4: detected capacity change from 0 to 114328
Mar 11 01:23:17.332458 kernel: loop5: detected capacity change from 0 to 31320
Mar 11 01:23:17.339741 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 11 01:23:17.354450 kernel: loop6: detected capacity change from 0 to 114432
Mar 11 01:23:17.366480 kernel: loop7: detected capacity change from 0 to 197488
Mar 11 01:23:17.377987 (sd-merge)[1415]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 11 01:23:17.379276 (sd-merge)[1415]: Merged extensions into '/usr'.
Mar 11 01:23:17.382380 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 11 01:23:17.382398 systemd[1]: Reloading...
Mar 11 01:23:17.458491 zram_generator::config[1463]: No configuration found.
Mar 11 01:23:17.557630 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 01:23:17.629794 systemd[1]: Reloading finished in 246 ms.
Mar 11 01:23:17.655534 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 11 01:23:17.668599 systemd[1]: Starting ensure-sysext.service...
Mar 11 01:23:17.673582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 01:23:17.681068 systemd[1]: Reloading requested from client PID 1504 ('systemctl') (unit ensure-sysext.service)...
Mar 11 01:23:17.681078 systemd[1]: Reloading...
Mar 11 01:23:17.722184 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 11 01:23:17.723482 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 11 01:23:17.724240 systemd-tmpfiles[1505]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 11 01:23:17.724577 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
Mar 11 01:23:17.726606 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
Mar 11 01:23:17.729903 systemd-tmpfiles[1505]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 01:23:17.729997 systemd-tmpfiles[1505]: Skipping /boot
Mar 11 01:23:17.738277 systemd-tmpfiles[1505]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 01:23:17.738370 systemd-tmpfiles[1505]: Skipping /boot
Mar 11 01:23:17.772451 zram_generator::config[1536]: No configuration found.
Mar 11 01:23:17.873579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 01:23:17.948150 systemd[1]: Reloading finished in 266 ms.
Mar 11 01:23:17.968558 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 11 01:23:17.983014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:23:17.998669 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 11 01:23:18.023671 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 11 01:23:18.029530 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 11 01:23:18.035622 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 11 01:23:18.044616 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 01:23:18.060836 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 11 01:23:18.066699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:23:18.076149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 01:23:18.083352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 01:23:18.091246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 01:23:18.112929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 01:23:18.118672 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 01:23:18.119844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 01:23:18.120014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 01:23:18.125089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 01:23:18.125225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 01:23:18.130992 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 01:23:18.131108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 01:23:18.141774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 11 01:23:18.146787 lvm[1599]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 11 01:23:18.149304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 01:23:18.156630 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 01:23:18.166918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 01:23:18.177657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 01:23:18.183018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 01:23:18.183871 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 11 01:23:18.190624 systemd-resolved[1604]: Positive Trust Anchors:
Mar 11 01:23:18.190808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 01:23:18.191104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 01:23:18.191507 systemd-resolved[1604]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 01:23:18.191541 systemd-resolved[1604]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 01:23:18.196389 systemd-resolved[1604]: Using system hostname 'ci-4081.3.6-n-541af3988c'.
Mar 11 01:23:18.196939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 01:23:18.197066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 01:23:18.202804 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 01:23:18.208204 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 01:23:18.208344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 01:23:18.218782 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 11 01:23:18.225703 augenrules[1628]: No rules
Mar 11 01:23:18.226847 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 11 01:23:18.232953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:23:18.238046 systemd[1]: Reached target network.target - Network.
Mar 11 01:23:18.242176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 01:23:18.247650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 01:23:18.255614 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 11 01:23:18.262384 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 11 01:23:18.263613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 01:23:18.272597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 01:23:18.279590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 01:23:18.286215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 01:23:18.290518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 01:23:18.290579 systemd[1]: Reached target time-set.target - System Time Set.
Mar 11 01:23:18.295460 systemd[1]: Finished ensure-sysext.service.
Mar 11 01:23:18.299028 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 11 01:23:18.305292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 01:23:18.305440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 01:23:18.310289 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 01:23:18.310411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 01:23:18.315237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 01:23:18.315354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 01:23:18.320996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 01:23:18.321121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 01:23:18.329677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 01:23:18.329763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 01:23:18.600813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 11 01:23:18.607443 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 11 01:23:18.818536 systemd-networkd[1335]: eth0: Gained IPv6LL
Mar 11 01:23:18.823455 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 11 01:23:18.829293 systemd[1]: Reached target network-online.target - Network is Online.
Mar 11 01:23:20.818466 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 11 01:23:20.832384 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 11 01:23:20.849596 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 11 01:23:20.858022 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 11 01:23:20.862965 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 01:23:20.867518 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 11 01:23:20.872743 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 11 01:23:20.878382 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 11 01:23:20.883186 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 11 01:23:20.888653 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 11 01:23:20.894419 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 11 01:23:20.894477 systemd[1]: Reached target paths.target - Path Units.
Mar 11 01:23:20.898137 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 01:23:20.904476 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 11 01:23:20.910493 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 11 01:23:20.917939 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 11 01:23:20.922878 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 11 01:23:20.927404 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 01:23:20.931425 systemd[1]: Reached target basic.target - Basic System.
Mar 11 01:23:20.935395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 11 01:23:20.935421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 11 01:23:20.946542 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 11 01:23:20.953558 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 11 01:23:20.964263 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 11 01:23:20.969574 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 11 01:23:20.978013 (chronyd)[1657]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 11 01:23:20.980826 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 11 01:23:20.987618 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 11 01:23:20.988710 jq[1661]: false
Mar 11 01:23:20.991816 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 11 01:23:20.991852 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 11 01:23:20.993624 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 11 01:23:21.000849 chronyd[1668]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 11 01:23:21.001963 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 11 01:23:21.004575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:23:21.005294 KVP[1665]: KVP starting; pid is:1665
Mar 11 01:23:21.010585 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 11 01:23:21.021359 KVP[1665]: KVP LIC Version: 3.1
Mar 11 01:23:21.021502 kernel: hv_utils: KVP IC version 4.0
Mar 11 01:23:21.022615 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 11 01:23:21.030575 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 11 01:23:21.031689 chronyd[1668]: Timezone right/UTC failed leap second check, ignoring
Mar 11 01:23:21.037394 chronyd[1668]: Loaded seccomp filter (level 2)
Mar 11 01:23:21.039513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 11 01:23:21.048548 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found loop4
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found loop5
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found loop6
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found loop7
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda1
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda2
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda3
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found usr
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda4
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda6
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda7
Mar 11 01:23:21.055524 extend-filesystems[1664]: Found sda9
Mar 11 01:23:21.055524 extend-filesystems[1664]: Checking size of /dev/sda9
Mar 11 01:23:21.066028 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 11 01:23:21.186037 dbus-daemon[1660]: [system] SELinux support is enabled
Mar 11 01:23:21.213976 extend-filesystems[1664]: Old size kept for /dev/sda9
Mar 11 01:23:21.213976 extend-filesystems[1664]: Found sr0
Mar 11 01:23:21.075578 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 11 01:23:21.080720 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 11 01:23:21.081571 systemd[1]: Starting update-engine.service - Update Engine...
Mar 11 01:23:21.242034 update_engine[1685]: I20260311 01:23:21.187983 1685 main.cc:92] Flatcar Update Engine starting
Mar 11 01:23:21.242034 update_engine[1685]: I20260311 01:23:21.197005 1685 update_check_scheduler.cc:74] Next update check in 4m30s
Mar 11 01:23:21.098558 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 11 01:23:21.242356 jq[1687]: true
Mar 11 01:23:21.115556 systemd[1]: Started chronyd.service - NTP client/server.
Mar 11 01:23:21.139339 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 11 01:23:21.140485 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 11 01:23:21.140778 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 11 01:23:21.140913 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 11 01:23:21.160783 systemd[1]: motdgen.service: Deactivated successfully.
Mar 11 01:23:21.160967 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 11 01:23:21.171029 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 11 01:23:21.194635 systemd-logind[1683]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 11 01:23:21.194646 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 11 01:23:21.196774 systemd-logind[1683]: New seat seat0.
Mar 11 01:23:21.209512 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 11 01:23:21.238792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 11 01:23:21.238995 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 11 01:23:21.268006 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 11 01:23:21.268049 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 11 01:23:21.270560 (ntainerd)[1710]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 11 01:23:21.277611 coreos-metadata[1659]: Mar 11 01:23:21.275 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 11 01:23:21.275160 dbus-daemon[1660]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 11 01:23:21.277586 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 11 01:23:21.277605 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 11 01:23:21.284888 systemd[1]: Started update-engine.service - Update Engine.
Mar 11 01:23:21.290007 coreos-metadata[1659]: Mar 11 01:23:21.289 INFO Fetch successful
Mar 11 01:23:21.290007 coreos-metadata[1659]: Mar 11 01:23:21.289 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 11 01:23:21.291994 tar[1704]: linux-arm64/LICENSE
Mar 11 01:23:21.292703 tar[1704]: linux-arm64/helm
Mar 11 01:23:21.295276 jq[1709]: true
Mar 11 01:23:21.296888 coreos-metadata[1659]: Mar 11 01:23:21.296 INFO Fetch successful
Mar 11 01:23:21.296888 coreos-metadata[1659]: Mar 11 01:23:21.296 INFO Fetching http://168.63.129.16/machine/ed113acc-585e-4f37-b220-7aa1d6a0a8b1/d017eb9c%2D156a%2D4843%2Da569%2Dc943d492f6df.%5Fci%2D4081.3.6%2Dn%2D541af3988c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 11 01:23:21.298349 coreos-metadata[1659]: Mar 11 01:23:21.298 INFO Fetch successful
Mar 11 01:23:21.298349 coreos-metadata[1659]: Mar 11 01:23:21.298 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 11 01:23:21.301692 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 11 01:23:21.309920 coreos-metadata[1659]: Mar 11 01:23:21.309 INFO Fetch successful
Mar 11 01:23:21.378504 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1700)
Mar 11 01:23:21.385493 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 11 01:23:21.396051 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 11 01:23:21.446097 bash[1748]: Updated "/home/core/.ssh/authorized_keys"
Mar 11 01:23:21.451760 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 11 01:23:21.460303 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 11 01:23:21.581532 locksmithd[1725]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 11 01:23:21.938938 tar[1704]: linux-arm64/README.md
Mar 11 01:23:21.952719 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 11 01:23:22.062451 containerd[1710]: time="2026-03-11T01:23:22.060911440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 11 01:23:22.115246 containerd[1710]: time="2026-03-11T01:23:22.115198760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.118551080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.118585920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.118601040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.118751640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.118768920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.118831480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.118843720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.119005680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.119021080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.119033480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119730 containerd[1710]: time="2026-03-11T01:23:22.119042640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119970 containerd[1710]: time="2026-03-11T01:23:22.119110880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119970 containerd[1710]: time="2026-03-11T01:23:22.119294800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119970 containerd[1710]: time="2026-03-11T01:23:22.119395560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 01:23:22.119970 containerd[1710]: time="2026-03-11T01:23:22.119409960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 11 01:23:22.119970 containerd[1710]: time="2026-03-11T01:23:22.119513120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 11 01:23:22.119970 containerd[1710]: time="2026-03-11T01:23:22.119556200Z" level=info msg="metadata content store policy set" policy=shared
Mar 11 01:23:22.138307 containerd[1710]: time="2026-03-11T01:23:22.138269720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 11 01:23:22.138472 containerd[1710]: time="2026-03-11T01:23:22.138454800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 11 01:23:22.138544 containerd[1710]: time="2026-03-11T01:23:22.138530680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 11 01:23:22.138679 containerd[1710]: time="2026-03-11T01:23:22.138665800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 11 01:23:22.138743 containerd[1710]: time="2026-03-11T01:23:22.138730800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 11 01:23:22.139688 containerd[1710]: time="2026-03-11T01:23:22.139659320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 11 01:23:22.139934 containerd[1710]: time="2026-03-11T01:23:22.139915320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 11 01:23:22.140043 containerd[1710]: time="2026-03-11T01:23:22.140024480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 11 01:23:22.140067 containerd[1710]: time="2026-03-11T01:23:22.140045880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 11 01:23:22.140067 containerd[1710]: time="2026-03-11T01:23:22.140060160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 11 01:23:22.140106 containerd[1710]: time="2026-03-11T01:23:22.140074320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140106 containerd[1710]: time="2026-03-11T01:23:22.140087360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140106 containerd[1710]: time="2026-03-11T01:23:22.140100640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140202 containerd[1710]: time="2026-03-11T01:23:22.140116560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140202 containerd[1710]: time="2026-03-11T01:23:22.140132080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140202 containerd[1710]: time="2026-03-11T01:23:22.140146120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140202 containerd[1710]: time="2026-03-11T01:23:22.140160040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140202 containerd[1710]: time="2026-03-11T01:23:22.140171480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 11 01:23:22.140202 containerd[1710]: time="2026-03-11T01:23:22.140192160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140206480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140218920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140232240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140244440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140257040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140268920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140281320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140303 containerd[1710]: time="2026-03-11T01:23:22.140294160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140308520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140320840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140334760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140347520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140363320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140384760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140396720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140473 containerd[1710]: time="2026-03-11T01:23:22.140407160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140499080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140520320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140531200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140543320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140552880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140566840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140576520Z" level=info msg="NRI interface is disabled by configuration."
Mar 11 01:23:22.140617 containerd[1710]: time="2026-03-11T01:23:22.140587560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 11 01:23:22.140918 containerd[1710]: time="2026-03-11T01:23:22.140861600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 11 01:23:22.141019 containerd[1710]: time="2026-03-11T01:23:22.140924800Z" level=info msg="Connect containerd service" Mar 11 01:23:22.141019 containerd[1710]: time="2026-03-11T01:23:22.140961200Z" level=info msg="using legacy CRI server" Mar 11 01:23:22.141019 containerd[1710]: time="2026-03-11T01:23:22.140967920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 11 01:23:22.141074 containerd[1710]: time="2026-03-11T01:23:22.141054960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 11 01:23:22.144432 containerd[1710]: time="2026-03-11T01:23:22.142857440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 11 01:23:22.144432 containerd[1710]: time="2026-03-11T01:23:22.143554240Z" level=info msg="Start subscribing containerd event" Mar 11 01:23:22.144432 containerd[1710]: time="2026-03-11T01:23:22.143600160Z" level=info msg="Start recovering state" Mar 11 01:23:22.144432 containerd[1710]: time="2026-03-11T01:23:22.143665520Z" level=info msg="Start event monitor" Mar 11 01:23:22.144432 containerd[1710]: time="2026-03-11T01:23:22.143676280Z" level=info msg="Start 
snapshots syncer" Mar 11 01:23:22.144432 containerd[1710]: time="2026-03-11T01:23:22.143684680Z" level=info msg="Start cni network conf syncer for default" Mar 11 01:23:22.144432 containerd[1710]: time="2026-03-11T01:23:22.143691680Z" level=info msg="Start streaming server" Mar 11 01:23:22.145652 containerd[1710]: time="2026-03-11T01:23:22.145628160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 11 01:23:22.145698 containerd[1710]: time="2026-03-11T01:23:22.145683920Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 11 01:23:22.148527 systemd[1]: Started containerd.service - containerd container runtime. Mar 11 01:23:22.154642 containerd[1710]: time="2026-03-11T01:23:22.154610800Z" level=info msg="containerd successfully booted in 0.097971s" Mar 11 01:23:22.287628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:22.300800 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:23:22.510780 sshd_keygen[1691]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 11 01:23:22.531254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 11 01:23:22.542010 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 11 01:23:22.550663 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 11 01:23:22.556425 systemd[1]: issuegen.service: Deactivated successfully. Mar 11 01:23:22.558262 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 11 01:23:22.573204 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 11 01:23:22.588646 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 11 01:23:22.597336 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 11 01:23:22.605567 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Mar 11 01:23:22.612197 systemd[1]: Reached target getty.target - Login Prompts. Mar 11 01:23:22.628129 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 11 01:23:22.633626 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 01:23:22.639557 systemd[1]: Startup finished in 593ms (kernel) + 11.418s (initrd) + 10.144s (userspace) = 22.156s. Mar 11 01:23:22.716098 kubelet[1793]: E0311 01:23:22.716054 1793 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:23:22.718845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:23:22.718969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:23:22.943570 login[1819]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 11 01:23:22.944535 login[1817]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:22.953258 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 11 01:23:22.959630 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 11 01:23:22.961592 systemd-logind[1683]: New session 1 of user core. Mar 11 01:23:22.985458 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 11 01:23:22.990675 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 11 01:23:22.996216 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 11 01:23:23.126251 systemd[1829]: Queued start job for default target default.target. Mar 11 01:23:23.132271 systemd[1829]: Created slice app.slice - User Application Slice. 
Mar 11 01:23:23.132299 systemd[1829]: Reached target paths.target - Paths. Mar 11 01:23:23.132311 systemd[1829]: Reached target timers.target - Timers. Mar 11 01:23:23.133522 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 01:23:23.145003 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 01:23:23.145106 systemd[1829]: Reached target sockets.target - Sockets. Mar 11 01:23:23.145118 systemd[1829]: Reached target basic.target - Basic System. Mar 11 01:23:23.145153 systemd[1829]: Reached target default.target - Main User Target. Mar 11 01:23:23.145179 systemd[1829]: Startup finished in 143ms. Mar 11 01:23:23.145252 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 01:23:23.147665 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 01:23:23.945712 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:23.951484 systemd-logind[1683]: New session 2 of user core. Mar 11 01:23:23.955601 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 11 01:23:24.462814 waagent[1820]: 2026-03-11T01:23:24.458609Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 11 01:23:24.463378 waagent[1820]: 2026-03-11T01:23:24.463316Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 11 01:23:24.466878 waagent[1820]: 2026-03-11T01:23:24.466825Z INFO Daemon Daemon Python: 3.11.9 Mar 11 01:23:24.470579 waagent[1820]: 2026-03-11T01:23:24.470523Z INFO Daemon Daemon Run daemon Mar 11 01:23:24.473710 waagent[1820]: 2026-03-11T01:23:24.473668Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 11 01:23:24.480648 waagent[1820]: 2026-03-11T01:23:24.480600Z INFO Daemon Daemon Using waagent for provisioning Mar 11 01:23:24.485076 waagent[1820]: 2026-03-11T01:23:24.485031Z INFO Daemon Daemon Activate resource disk Mar 11 01:23:24.488801 waagent[1820]: 2026-03-11T01:23:24.488756Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 11 01:23:24.497994 waagent[1820]: 2026-03-11T01:23:24.497943Z INFO Daemon Daemon Found device: None Mar 11 01:23:24.501722 waagent[1820]: 2026-03-11T01:23:24.501677Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 11 01:23:24.508503 waagent[1820]: 2026-03-11T01:23:24.508457Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 11 01:23:24.519380 waagent[1820]: 2026-03-11T01:23:24.519325Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 11 01:23:24.524130 waagent[1820]: 2026-03-11T01:23:24.524083Z INFO Daemon Daemon Running default provisioning handler Mar 11 01:23:24.535886 waagent[1820]: 2026-03-11T01:23:24.535830Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 11 01:23:24.546214 waagent[1820]: 2026-03-11T01:23:24.546156Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 11 01:23:24.553462 waagent[1820]: 2026-03-11T01:23:24.553394Z INFO Daemon Daemon cloud-init is enabled: False Mar 11 01:23:24.557231 waagent[1820]: 2026-03-11T01:23:24.557184Z INFO Daemon Daemon Copying ovf-env.xml Mar 11 01:23:24.646300 waagent[1820]: 2026-03-11T01:23:24.646216Z INFO Daemon Daemon Successfully mounted dvd Mar 11 01:23:24.675499 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 11 01:23:24.677231 waagent[1820]: 2026-03-11T01:23:24.677153Z INFO Daemon Daemon Detect protocol endpoint Mar 11 01:23:24.680982 waagent[1820]: 2026-03-11T01:23:24.680932Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 11 01:23:24.685168 waagent[1820]: 2026-03-11T01:23:24.685124Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 11 01:23:24.690175 waagent[1820]: 2026-03-11T01:23:24.690134Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 11 01:23:24.694095 waagent[1820]: 2026-03-11T01:23:24.694051Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 11 01:23:24.697861 waagent[1820]: 2026-03-11T01:23:24.697812Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 11 01:23:24.725856 waagent[1820]: 2026-03-11T01:23:24.725763Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 11 01:23:24.731014 waagent[1820]: 2026-03-11T01:23:24.730985Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 11 01:23:24.735210 waagent[1820]: 2026-03-11T01:23:24.735174Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 11 01:23:24.914475 waagent[1820]: 2026-03-11T01:23:24.914202Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 11 01:23:24.919180 waagent[1820]: 2026-03-11T01:23:24.919129Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 11 01:23:24.926167 waagent[1820]: 2026-03-11T01:23:24.926117Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 11 01:23:24.943923 waagent[1820]: 2026-03-11T01:23:24.943878Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 11 01:23:24.948250 waagent[1820]: 2026-03-11T01:23:24.948205Z INFO Daemon Mar 11 01:23:24.950390 waagent[1820]: 2026-03-11T01:23:24.950350Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ce1bd37d-db41-4099-ba1c-f7e23403ef1a eTag: 8029725805345705222 source: Fabric] Mar 11 01:23:24.958970 waagent[1820]: 2026-03-11T01:23:24.958926Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 11 01:23:24.964096 waagent[1820]: 2026-03-11T01:23:24.964051Z INFO Daemon Mar 11 01:23:24.966213 waagent[1820]: 2026-03-11T01:23:24.966173Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 11 01:23:24.975030 waagent[1820]: 2026-03-11T01:23:24.974992Z INFO Daemon Daemon Downloading artifacts profile blob Mar 11 01:23:25.046884 waagent[1820]: 2026-03-11T01:23:25.046754Z INFO Daemon Downloaded certificate {'thumbprint': '6C7EDE9DBB1014635D501192E906268C29062EE1', 'hasPrivateKey': True} Mar 11 01:23:25.056093 waagent[1820]: 2026-03-11T01:23:25.056038Z INFO Daemon Fetch goal state completed Mar 11 01:23:25.066228 waagent[1820]: 2026-03-11T01:23:25.066184Z INFO Daemon Daemon Starting provisioning Mar 11 01:23:25.070406 waagent[1820]: 2026-03-11T01:23:25.070359Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 11 01:23:25.074019 waagent[1820]: 2026-03-11T01:23:25.073980Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-541af3988c] Mar 11 01:23:25.080155 waagent[1820]: 2026-03-11T01:23:25.080100Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-541af3988c] Mar 11 01:23:25.085479 waagent[1820]: 2026-03-11T01:23:25.085412Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 11 01:23:25.090763 waagent[1820]: 2026-03-11T01:23:25.090715Z INFO Daemon Daemon Primary interface is [eth0] Mar 11 01:23:25.117491 systemd-networkd[1335]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:23:25.117497 systemd-networkd[1335]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 01:23:25.117541 systemd-networkd[1335]: eth0: DHCP lease lost Mar 11 01:23:25.119064 waagent[1820]: 2026-03-11T01:23:25.118984Z INFO Daemon Daemon Create user account if not exists Mar 11 01:23:25.123372 waagent[1820]: 2026-03-11T01:23:25.123324Z INFO Daemon Daemon User core already exists, skip useradd Mar 11 01:23:25.127761 waagent[1820]: 2026-03-11T01:23:25.127715Z INFO Daemon Daemon Configure sudoer Mar 11 01:23:25.131325 waagent[1820]: 2026-03-11T01:23:25.131275Z INFO Daemon Daemon Configure sshd Mar 11 01:23:25.134627 systemd-networkd[1335]: eth0: DHCPv6 lease lost Mar 11 01:23:25.135402 waagent[1820]: 2026-03-11T01:23:25.135350Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 11 01:23:25.145375 waagent[1820]: 2026-03-11T01:23:25.145319Z INFO Daemon Daemon Deploy ssh public key. 
Mar 11 01:23:25.155496 systemd-networkd[1335]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 11 01:23:26.263378 waagent[1820]: 2026-03-11T01:23:26.263305Z INFO Daemon Daemon Provisioning complete Mar 11 01:23:26.278622 waagent[1820]: 2026-03-11T01:23:26.278571Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 11 01:23:26.283538 waagent[1820]: 2026-03-11T01:23:26.283495Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 11 01:23:26.290672 waagent[1820]: 2026-03-11T01:23:26.290629Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 11 01:23:26.424217 waagent[1879]: 2026-03-11T01:23:26.423544Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 11 01:23:26.424217 waagent[1879]: 2026-03-11T01:23:26.423703Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 11 01:23:26.424217 waagent[1879]: 2026-03-11T01:23:26.423760Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 11 01:23:26.682360 waagent[1879]: 2026-03-11T01:23:26.682221Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 11 01:23:26.682763 waagent[1879]: 2026-03-11T01:23:26.682719Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 11 01:23:26.682903 waagent[1879]: 2026-03-11T01:23:26.682869Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 11 01:23:26.690584 waagent[1879]: 2026-03-11T01:23:26.690525Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 11 01:23:26.695558 waagent[1879]: 2026-03-11T01:23:26.695517Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 11 01:23:26.696113 waagent[1879]: 2026-03-11T01:23:26.696069Z INFO ExtHandler Mar 11 01:23:26.696258 waagent[1879]: 
2026-03-11T01:23:26.696225Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 93cb0f9f-205c-4f00-b713-b4fdffd72a92 eTag: 8029725805345705222 source: Fabric] Mar 11 01:23:26.696665 waagent[1879]: 2026-03-11T01:23:26.696622Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 11 01:23:26.710472 waagent[1879]: 2026-03-11T01:23:26.710031Z INFO ExtHandler Mar 11 01:23:26.710472 waagent[1879]: 2026-03-11T01:23:26.710225Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 11 01:23:26.713948 waagent[1879]: 2026-03-11T01:23:26.713907Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 11 01:23:26.910501 waagent[1879]: 2026-03-11T01:23:26.909001Z INFO ExtHandler Downloaded certificate {'thumbprint': '6C7EDE9DBB1014635D501192E906268C29062EE1', 'hasPrivateKey': True} Mar 11 01:23:26.910501 waagent[1879]: 2026-03-11T01:23:26.909659Z INFO ExtHandler Fetch goal state completed Mar 11 01:23:26.924246 waagent[1879]: 2026-03-11T01:23:26.924187Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1879 Mar 11 01:23:26.924534 waagent[1879]: 2026-03-11T01:23:26.924493Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 11 01:23:26.926204 waagent[1879]: 2026-03-11T01:23:26.926155Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 11 01:23:26.926692 waagent[1879]: 2026-03-11T01:23:26.926648Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 11 01:23:27.067341 waagent[1879]: 2026-03-11T01:23:27.067245Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 11 01:23:27.067692 waagent[1879]: 2026-03-11T01:23:27.067648Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 11 
01:23:27.073951 waagent[1879]: 2026-03-11T01:23:27.073916Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 11 01:23:27.080612 systemd[1]: Reloading requested from client PID 1894 ('systemctl') (unit waagent.service)... Mar 11 01:23:27.080888 systemd[1]: Reloading... Mar 11 01:23:27.171461 zram_generator::config[1931]: No configuration found. Mar 11 01:23:27.261505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 01:23:27.336848 systemd[1]: Reloading finished in 255 ms. Mar 11 01:23:27.366880 waagent[1879]: 2026-03-11T01:23:27.366469Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 11 01:23:27.372003 systemd[1]: Reloading requested from client PID 1982 ('systemctl') (unit waagent.service)... Mar 11 01:23:27.372016 systemd[1]: Reloading... Mar 11 01:23:27.446454 zram_generator::config[2016]: No configuration found. Mar 11 01:23:27.552400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 01:23:27.626825 systemd[1]: Reloading finished in 254 ms. Mar 11 01:23:27.650569 waagent[1879]: 2026-03-11T01:23:27.649774Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 11 01:23:27.650569 waagent[1879]: 2026-03-11T01:23:27.649937Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 11 01:23:28.109549 waagent[1879]: 2026-03-11T01:23:28.109404Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 11 01:23:28.110227 waagent[1879]: 2026-03-11T01:23:28.110181Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 11 01:23:28.111114 waagent[1879]: 2026-03-11T01:23:28.111063Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 11 01:23:28.111225 waagent[1879]: 2026-03-11T01:23:28.111183Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 11 01:23:28.111356 waagent[1879]: 2026-03-11T01:23:28.111315Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 11 01:23:28.111836 waagent[1879]: 2026-03-11T01:23:28.111784Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 11 01:23:28.112136 waagent[1879]: 2026-03-11T01:23:28.112024Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 11 01:23:28.112550 waagent[1879]: 2026-03-11T01:23:28.112496Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 11 01:23:28.112644 waagent[1879]: 2026-03-11T01:23:28.112610Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 11 01:23:28.112787 waagent[1879]: 2026-03-11T01:23:28.112747Z INFO EnvHandler ExtHandler Configure routes Mar 11 01:23:28.112850 waagent[1879]: 2026-03-11T01:23:28.112823Z INFO EnvHandler ExtHandler Gateway:None Mar 11 01:23:28.112899 waagent[1879]: 2026-03-11T01:23:28.112875Z INFO EnvHandler ExtHandler Routes:None Mar 11 01:23:28.113252 waagent[1879]: 2026-03-11T01:23:28.113190Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 11 01:23:28.113499 waagent[1879]: 2026-03-11T01:23:28.113423Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Mar 11 01:23:28.113990 waagent[1879]: 2026-03-11T01:23:28.113932Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 11 01:23:28.114155 waagent[1879]: 2026-03-11T01:23:28.114099Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 11 01:23:28.114366 waagent[1879]: 2026-03-11T01:23:28.114321Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 11 01:23:28.114366 waagent[1879]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 11 01:23:28.114366 waagent[1879]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 11 01:23:28.114366 waagent[1879]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 11 01:23:28.114366 waagent[1879]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 11 01:23:28.114366 waagent[1879]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 11 01:23:28.114366 waagent[1879]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 11 01:23:28.114769 waagent[1879]: 2026-03-11T01:23:28.114572Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 11 01:23:28.120233 waagent[1879]: 2026-03-11T01:23:28.120185Z INFO ExtHandler ExtHandler Mar 11 01:23:28.122465 waagent[1879]: 2026-03-11T01:23:28.120386Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 44b3719d-7e03-4ea3-97e8-5a18e3996456 correlation 801f7511-b5ae-41c4-aa34-c8bfc65f658b created: 2026-03-11T01:22:32.008125Z] Mar 11 01:23:28.125456 waagent[1879]: 2026-03-11T01:23:28.125110Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 11 01:23:28.125731 waagent[1879]: 2026-03-11T01:23:28.125684Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 5 ms] Mar 11 01:23:28.160143 waagent[1879]: 2026-03-11T01:23:28.160010Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 909EC685-43E4-4F9B-9D16-72E72663B43E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 11 01:23:28.199468 waagent[1879]: 2026-03-11T01:23:28.199323Z INFO MonitorHandler ExtHandler Network interfaces: Mar 11 01:23:28.199468 waagent[1879]: Executing ['ip', '-a', '-o', 'link']: Mar 11 01:23:28.199468 waagent[1879]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 11 01:23:28.199468 waagent[1879]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:89:b9:39 brd ff:ff:ff:ff:ff:ff Mar 11 01:23:28.199468 waagent[1879]: 3: enP10151s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:89:b9:39 brd ff:ff:ff:ff:ff:ff\ altname enP10151p0s2 Mar 11 01:23:28.199468 waagent[1879]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 11 01:23:28.199468 waagent[1879]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 11 01:23:28.199468 waagent[1879]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 11 01:23:28.199468 waagent[1879]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 11 01:23:28.199468 waagent[1879]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 11 01:23:28.199468 waagent[1879]: 2: eth0 inet6 fe80::7eed:8dff:fe89:b939/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 11 01:23:28.225561 waagent[1879]: 2026-03-11T01:23:28.225485Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules:
Mar 11 01:23:28.225561 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 11 01:23:28.225561 waagent[1879]: pkts bytes target prot opt in out source destination
Mar 11 01:23:28.225561 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 11 01:23:28.225561 waagent[1879]: pkts bytes target prot opt in out source destination
Mar 11 01:23:28.225561 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 11 01:23:28.225561 waagent[1879]: pkts bytes target prot opt in out source destination
Mar 11 01:23:28.225561 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 11 01:23:28.225561 waagent[1879]: 6 886 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 11 01:23:28.225561 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 11 01:23:28.229503 waagent[1879]: 2026-03-11T01:23:28.229424Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 11 01:23:28.229503 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 11 01:23:28.229503 waagent[1879]: pkts bytes target prot opt in out source destination
Mar 11 01:23:28.229503 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 11 01:23:28.229503 waagent[1879]: pkts bytes target prot opt in out source destination
Mar 11 01:23:28.229503 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 11 01:23:28.229503 waagent[1879]: pkts bytes target prot opt in out source destination
Mar 11 01:23:28.229503 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 11 01:23:28.229503 waagent[1879]: 10 1301 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 11 01:23:28.229503 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 11 01:23:28.229779 waagent[1879]: 2026-03-11T01:23:28.229721Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 11 01:23:32.969617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 11 01:23:32.977682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:23:33.078061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:23:33.082051 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 11 01:23:33.188566 kubelet[2109]: E0311 01:23:33.188514 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 11 01:23:33.191275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 11 01:23:33.191402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 11 01:23:43.312505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 11 01:23:43.319819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:23:43.617419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:23:43.620982 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 11 01:23:43.652799 kubelet[2124]: E0311 01:23:43.652748 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 11 01:23:43.655008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 11 01:23:43.655133 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 11 01:23:44.824048 chronyd[1668]: Selected source PHC0
Mar 11 01:23:48.246384 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 11 01:23:48.248591 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.16.10:40578.service - OpenSSH per-connection server daemon (10.200.16.10:40578).
Mar 11 01:23:48.835491 sshd[2132]: Accepted publickey for core from 10.200.16.10 port 40578 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:23:48.836847 sshd[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:23:48.840485 systemd-logind[1683]: New session 3 of user core.
Mar 11 01:23:48.851553 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 11 01:23:49.255197 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.16.10:40590.service - OpenSSH per-connection server daemon (10.200.16.10:40590).
Mar 11 01:23:49.712404 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 40590 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:23:49.713759 sshd[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:23:49.717491 systemd-logind[1683]: New session 4 of user core.
Mar 11 01:23:49.724549 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 11 01:23:50.042173 sshd[2137]: pam_unix(sshd:session): session closed for user core
Mar 11 01:23:50.046063 systemd[1]: sshd@1-10.200.20.15:22-10.200.16.10:40590.service: Deactivated successfully.
Mar 11 01:23:50.047827 systemd[1]: session-4.scope: Deactivated successfully.
Mar 11 01:23:50.048515 systemd-logind[1683]: Session 4 logged out. Waiting for processes to exit.
Mar 11 01:23:50.049450 systemd-logind[1683]: Removed session 4.
Mar 11 01:23:50.129695 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.16.10:43700.service - OpenSSH per-connection server daemon (10.200.16.10:43700).
Mar 11 01:23:50.618820 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 43700 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:23:50.620126 sshd[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:23:50.624564 systemd-logind[1683]: New session 5 of user core.
Mar 11 01:23:50.627579 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 11 01:23:50.961831 sshd[2144]: pam_unix(sshd:session): session closed for user core
Mar 11 01:23:50.965193 systemd[1]: sshd@2-10.200.20.15:22-10.200.16.10:43700.service: Deactivated successfully.
Mar 11 01:23:50.966839 systemd[1]: session-5.scope: Deactivated successfully.
Mar 11 01:23:50.968117 systemd-logind[1683]: Session 5 logged out. Waiting for processes to exit.
Mar 11 01:23:50.969154 systemd-logind[1683]: Removed session 5.
Mar 11 01:23:51.049540 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.16.10:43704.service - OpenSSH per-connection server daemon (10.200.16.10:43704).
Mar 11 01:23:51.545343 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 43704 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:23:51.546154 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:23:51.551029 systemd-logind[1683]: New session 6 of user core.
Mar 11 01:23:51.553620 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 11 01:23:51.890981 sshd[2151]: pam_unix(sshd:session): session closed for user core
Mar 11 01:23:51.894759 systemd[1]: sshd@3-10.200.20.15:22-10.200.16.10:43704.service: Deactivated successfully.
Mar 11 01:23:51.898580 systemd[1]: session-6.scope: Deactivated successfully.
Mar 11 01:23:51.899349 systemd-logind[1683]: Session 6 logged out. Waiting for processes to exit.
Mar 11 01:23:51.900366 systemd-logind[1683]: Removed session 6.
Mar 11 01:23:51.974702 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.16.10:43718.service - OpenSSH per-connection server daemon (10.200.16.10:43718).
Mar 11 01:23:52.427059 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 43718 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:23:52.427869 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:23:52.431337 systemd-logind[1683]: New session 7 of user core.
Mar 11 01:23:52.439613 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 11 01:23:52.858988 sudo[2161]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 11 01:23:52.859268 sudo[2161]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 11 01:23:52.888265 sudo[2161]: pam_unix(sudo:session): session closed for user root
Mar 11 01:23:52.967004 sshd[2158]: pam_unix(sshd:session): session closed for user core
Mar 11 01:23:52.970902 systemd[1]: sshd@4-10.200.20.15:22-10.200.16.10:43718.service: Deactivated successfully.
Mar 11 01:23:52.972361 systemd[1]: session-7.scope: Deactivated successfully.
Mar 11 01:23:52.972982 systemd-logind[1683]: Session 7 logged out. Waiting for processes to exit.
Mar 11 01:23:52.974065 systemd-logind[1683]: Removed session 7.
Mar 11 01:23:53.048880 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.16.10:43726.service - OpenSSH per-connection server daemon (10.200.16.10:43726).
Mar 11 01:23:53.501088 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 43726 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:23:53.501932 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:23:53.505417 systemd-logind[1683]: New session 8 of user core.
Mar 11 01:23:53.515547 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 11 01:23:53.756563 sudo[2170]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 11 01:23:53.757188 sudo[2170]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 11 01:23:53.757899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 11 01:23:53.765696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:23:53.768959 sudo[2170]: pam_unix(sudo:session): session closed for user root
Mar 11 01:23:53.774520 sudo[2169]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 11 01:23:53.775057 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 11 01:23:53.790968 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 11 01:23:53.792509 auditctl[2176]: No rules
Mar 11 01:23:53.793764 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 11 01:23:53.794099 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 11 01:23:53.799628 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 11 01:23:53.829096 augenrules[2194]: No rules
Mar 11 01:23:53.830097 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 11 01:23:53.831166 sudo[2169]: pam_unix(sudo:session): session closed for user root
Mar 11 01:23:53.908966 sshd[2166]: pam_unix(sshd:session): session closed for user core
Mar 11 01:23:53.918787 systemd[1]: sshd@5-10.200.20.15:22-10.200.16.10:43726.service: Deactivated successfully.
Mar 11 01:23:53.923573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:23:53.923949 systemd[1]: session-8.scope: Deactivated successfully.
Mar 11 01:23:53.925354 systemd-logind[1683]: Session 8 logged out. Waiting for processes to exit.
Mar 11 01:23:53.925999 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 11 01:23:53.927255 systemd-logind[1683]: Removed session 8.
Mar 11 01:23:53.958326 kubelet[2205]: E0311 01:23:53.958239 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 11 01:23:53.960677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 11 01:23:53.960919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 11 01:23:53.993403 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.16.10:43728.service - OpenSSH per-connection server daemon (10.200.16.10:43728).
Mar 11 01:23:54.442808 sshd[2214]: Accepted publickey for core from 10.200.16.10 port 43728 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:23:54.444089 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:23:54.448604 systemd-logind[1683]: New session 9 of user core.
Mar 11 01:23:54.453681 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 11 01:23:54.698982 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 11 01:23:54.699816 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 11 01:23:55.576851 (dockerd)[2232]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 11 01:23:55.576863 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 11 01:23:56.051509 dockerd[2232]: time="2026-03-11T01:23:56.051104583Z" level=info msg="Starting up"
Mar 11 01:23:56.439709 dockerd[2232]: time="2026-03-11T01:23:56.439670581Z" level=info msg="Loading containers: start."
Mar 11 01:23:56.577587 kernel: Initializing XFRM netlink socket
Mar 11 01:23:56.712820 systemd-networkd[1335]: docker0: Link UP
Mar 11 01:23:56.738949 dockerd[2232]: time="2026-03-11T01:23:56.738416875Z" level=info msg="Loading containers: done."
Mar 11 01:23:56.759139 dockerd[2232]: time="2026-03-11T01:23:56.759101903Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 11 01:23:56.759403 dockerd[2232]: time="2026-03-11T01:23:56.759384102Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 11 01:23:56.759596 dockerd[2232]: time="2026-03-11T01:23:56.759580062Z" level=info msg="Daemon has completed initialization"
Mar 11 01:23:56.830297 dockerd[2232]: time="2026-03-11T01:23:56.830230578Z" level=info msg="API listen on /run/docker.sock"
Mar 11 01:23:56.830765 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 11 01:23:57.260634 containerd[1710]: time="2026-03-11T01:23:57.260397911Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 11 01:23:58.263227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240698330.mount: Deactivated successfully.
Mar 11 01:24:00.014469 containerd[1710]: time="2026-03-11T01:24:00.014414802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:00.020225 containerd[1710]: time="2026-03-11T01:24:00.020195481Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796"
Mar 11 01:24:00.024183 containerd[1710]: time="2026-03-11T01:24:00.024154081Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:00.029280 containerd[1710]: time="2026-03-11T01:24:00.029237720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:00.030672 containerd[1710]: time="2026-03-11T01:24:00.030240080Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 2.769805049s"
Mar 11 01:24:00.030672 containerd[1710]: time="2026-03-11T01:24:00.030273400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\""
Mar 11 01:24:00.030802 containerd[1710]: time="2026-03-11T01:24:00.030761720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 11 01:24:01.471620 containerd[1710]: time="2026-03-11T01:24:01.470590881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:01.474067 containerd[1710]: time="2026-03-11T01:24:01.474039081Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039"
Mar 11 01:24:01.478002 containerd[1710]: time="2026-03-11T01:24:01.477975521Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:01.485336 containerd[1710]: time="2026-03-11T01:24:01.485308280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:01.486164 containerd[1710]: time="2026-03-11T01:24:01.486131720Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.45533472s"
Mar 11 01:24:01.486230 containerd[1710]: time="2026-03-11T01:24:01.486167520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\""
Mar 11 01:24:01.486702 containerd[1710]: time="2026-03-11T01:24:01.486580360Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 11 01:24:02.823243 containerd[1710]: time="2026-03-11T01:24:02.823192078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:02.826965 containerd[1710]: time="2026-03-11T01:24:02.826930478Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901"
Mar 11 01:24:02.830648 containerd[1710]: time="2026-03-11T01:24:02.830605078Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:02.836449 containerd[1710]: time="2026-03-11T01:24:02.836140158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:02.837368 containerd[1710]: time="2026-03-11T01:24:02.837194438Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.350575998s"
Mar 11 01:24:02.837368 containerd[1710]: time="2026-03-11T01:24:02.837227838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\""
Mar 11 01:24:02.838450 containerd[1710]: time="2026-03-11T01:24:02.837791478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 11 01:24:04.062268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 11 01:24:04.072809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:24:04.201659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:24:04.205653 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 11 01:24:04.598944 kubelet[2442]: E0311 01:24:04.337553 2442 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 11 01:24:04.340098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 11 01:24:04.340248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 11 01:24:04.870463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901403545.mount: Deactivated successfully.
Mar 11 01:24:05.098993 containerd[1710]: time="2026-03-11T01:24:05.098941753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:05.102145 containerd[1710]: time="2026-03-11T01:24:05.102116473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583"
Mar 11 01:24:05.106276 containerd[1710]: time="2026-03-11T01:24:05.106204473Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:05.112450 containerd[1710]: time="2026-03-11T01:24:05.110690033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:05.112450 containerd[1710]: time="2026-03-11T01:24:05.112327353Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 2.274499315s"
Mar 11 01:24:05.112450 containerd[1710]: time="2026-03-11T01:24:05.112358953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\""
Mar 11 01:24:05.113252 containerd[1710]: time="2026-03-11T01:24:05.113228473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 11 01:24:05.201966 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 11 01:24:05.808303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136917927.mount: Deactivated successfully.
Mar 11 01:24:06.487464 update_engine[1685]: I20260311 01:24:06.486937 1685 update_attempter.cc:509] Updating boot flags...
Mar 11 01:24:06.544948 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2517)
Mar 11 01:24:06.619455 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2519)
Mar 11 01:24:07.296470 containerd[1710]: time="2026-03-11T01:24:07.296164309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:07.299633 containerd[1710]: time="2026-03-11T01:24:07.299341309Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Mar 11 01:24:07.303868 containerd[1710]: time="2026-03-11T01:24:07.303367229Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:07.309308 containerd[1710]: time="2026-03-11T01:24:07.309267349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:07.310425 containerd[1710]: time="2026-03-11T01:24:07.310391909Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 2.197053156s"
Mar 11 01:24:07.310425 containerd[1710]: time="2026-03-11T01:24:07.310423669Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Mar 11 01:24:07.311055 containerd[1710]: time="2026-03-11T01:24:07.311031389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 11 01:24:08.319659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055456011.mount: Deactivated successfully.
Mar 11 01:24:08.341961 containerd[1710]: time="2026-03-11T01:24:08.341167187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:08.344825 containerd[1710]: time="2026-03-11T01:24:08.344798147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 11 01:24:08.349446 containerd[1710]: time="2026-03-11T01:24:08.348375227Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:08.353037 containerd[1710]: time="2026-03-11T01:24:08.353004067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:08.353893 containerd[1710]: time="2026-03-11T01:24:08.353859387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 1.042798518s"
Mar 11 01:24:08.354002 containerd[1710]: time="2026-03-11T01:24:08.353984547Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 11 01:24:08.354785 containerd[1710]: time="2026-03-11T01:24:08.354746147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 11 01:24:09.118179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161839647.mount: Deactivated successfully.
Mar 11 01:24:10.198000 containerd[1710]: time="2026-03-11T01:24:10.197953280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:10.201934 containerd[1710]: time="2026-03-11T01:24:10.201900638Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165"
Mar 11 01:24:10.205608 containerd[1710]: time="2026-03-11T01:24:10.205542877Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:10.210889 containerd[1710]: time="2026-03-11T01:24:10.210846635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 01:24:10.212019 containerd[1710]: time="2026-03-11T01:24:10.211860995Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.856981808s"
Mar 11 01:24:10.212019 containerd[1710]: time="2026-03-11T01:24:10.211893795Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Mar 11 01:24:13.542840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:24:13.550639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:24:13.586264 systemd[1]: Reloading requested from client PID 2670 ('systemctl') (unit session-9.scope)...
Mar 11 01:24:13.586282 systemd[1]: Reloading...
Mar 11 01:24:13.673460 zram_generator::config[2713]: No configuration found.
Mar 11 01:24:13.767958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 01:24:13.847745 systemd[1]: Reloading finished in 261 ms.
Mar 11 01:24:13.964620 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 11 01:24:13.964705 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 11 01:24:13.964939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:24:13.982695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 01:24:14.682097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 01:24:14.687359 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 11 01:24:14.727913 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 11 01:24:15.321756 kubelet[2774]: I0311 01:24:15.321268 2774 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 11 01:24:15.321756 kubelet[2774]: I0311 01:24:15.321312 2774 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 11 01:24:15.323462 kubelet[2774]: I0311 01:24:15.322793 2774 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 11 01:24:15.323462 kubelet[2774]: I0311 01:24:15.322814 2774 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 11 01:24:15.323462 kubelet[2774]: I0311 01:24:15.323305 2774 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 11 01:24:15.333590 kubelet[2774]: E0311 01:24:15.333552 2774 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 11 01:24:15.333892 kubelet[2774]: I0311 01:24:15.333873 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 11 01:24:15.336262 kubelet[2774]: E0311 01:24:15.336234 2774 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 11 01:24:15.336412 kubelet[2774]: I0311 01:24:15.336400 2774 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 11 01:24:15.339041 kubelet[2774]: I0311 01:24:15.339024 2774 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 11 01:24:15.339913 kubelet[2774]: I0311 01:24:15.339883 2774 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 11 01:24:15.340128 kubelet[2774]: I0311 01:24:15.339985 2774 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-541af3988c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 11 01:24:15.340255 kubelet[2774]: I0311 01:24:15.340246 2774 topology_manager.go:143] "Creating topology manager with none policy"
Mar 11 01:24:15.340308 kubelet[2774]: I0311 01:24:15.340297 2774 container_manager_linux.go:308] "Creating device plugin manager"
Mar 11 01:24:15.340448 kubelet[2774]: I0311 01:24:15.340427 2774 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 11 01:24:15.345992 kubelet[2774]: I0311 01:24:15.345974 2774 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 11 01:24:15.346201 kubelet[2774]: I0311 01:24:15.346190 2774 kubelet.go:482] "Attempting to sync node with API server"
Mar 11 01:24:15.346268 kubelet[2774]: I0311 01:24:15.346259 2774 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 11 01:24:15.346325 kubelet[2774]: I0311 01:24:15.346318 2774 kubelet.go:394] "Adding apiserver pod source"
Mar 11 01:24:15.346375 kubelet[2774]: I0311 01:24:15.346367 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 11 01:24:15.349271 kubelet[2774]: I0311 01:24:15.349255 2774 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 11 01:24:15.351823 kubelet[2774]: I0311 01:24:15.350180 2774 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 11 01:24:15.351823 kubelet[2774]: I0311 01:24:15.350214 2774 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 11 01:24:15.351823 kubelet[2774]: W0311 01:24:15.350250 2774 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 11 01:24:15.352602 kubelet[2774]: I0311 01:24:15.352583 2774 server.go:1257] "Started kubelet"
Mar 11 01:24:15.356998 kubelet[2774]: I0311 01:24:15.356954 2774 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 11 01:24:15.358165 kubelet[2774]: E0311 01:24:15.357198 2774 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-541af3988c.189ba4f622c09a2c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-541af3988c,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-541af3988c,},FirstTimestamp:2026-03-11 01:24:15.352379948 +0000 UTC m=+0.662019389,LastTimestamp:2026-03-11 01:24:15.352379948 +0000 UTC m=+0.662019389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-541af3988c,}"
Mar 11 01:24:15.358543 kubelet[2774]: I0311 01:24:15.358525 2774 server.go:317] "Adding debug handlers to kubelet server"
Mar 11 01:24:15.360552 kubelet[2774]: I0311 01:24:15.359681 2774 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 11 01:24:15.360552 kubelet[2774]: I0311 01:24:15.359736 2774 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 11 01:24:15.360552 kubelet[2774]: I0311 01:24:15.359972 2774 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 11 01:24:15.362050 kubelet[2774]: I0311 01:24:15.362014 2774 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 11 01:24:15.364266 kubelet[2774]: E0311 01:24:15.364247 2774 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 11 01:24:15.364572 kubelet[2774]: I0311 01:24:15.364556 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 11 01:24:15.366415 kubelet[2774]: E0311 01:24:15.366388 2774 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-541af3988c\" not found"
Mar 11 01:24:15.367447 kubelet[2774]: I0311 01:24:15.367422 2774 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 11 01:24:15.367604 kubelet[2774]: I0311 01:24:15.367593 2774 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 11 01:24:15.367710 kubelet[2774]: I0311 01:24:15.367699 2774 reconciler.go:29] "Reconciler: start to sync state"
Mar 11 01:24:15.367946 kubelet[2774]: I0311 01:24:15.367931 2774 factory.go:223] Registration of the systemd container factory successfully
Mar 11 01:24:15.368110 kubelet[2774]: I0311 01:24:15.368093 2774 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 11 01:24:15.368520 kubelet[2774]: E0311 01:24:15.367967 2774 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-541af3988c?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms"
Mar 11 01:24:15.369657 kubelet[2774]: I0311 01:24:15.369639 2774 factory.go:223] Registration of the containerd container factory successfully
Mar 11 01:24:15.378333 kubelet[2774]: I0311 01:24:15.378290 2774 kubelet_network_linux.go:54] "Initialized iptables rules."
protocol="IPv4" Mar 11 01:24:15.379225 kubelet[2774]: I0311 01:24:15.379201 2774 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 11 01:24:15.379225 kubelet[2774]: I0311 01:24:15.379222 2774 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 11 01:24:15.379309 kubelet[2774]: I0311 01:24:15.379254 2774 kubelet.go:2501] "Starting kubelet main sync loop" Mar 11 01:24:15.379309 kubelet[2774]: E0311 01:24:15.379290 2774 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 01:24:15.408252 kubelet[2774]: I0311 01:24:15.408231 2774 cpu_manager.go:225] "Starting" policy="none" Mar 11 01:24:15.408398 kubelet[2774]: I0311 01:24:15.408387 2774 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 11 01:24:15.408825 kubelet[2774]: I0311 01:24:15.408772 2774 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 11 01:24:15.415314 kubelet[2774]: I0311 01:24:15.415294 2774 policy_none.go:50] "Start" Mar 11 01:24:15.415314 kubelet[2774]: I0311 01:24:15.415316 2774 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 01:24:15.415409 kubelet[2774]: I0311 01:24:15.415328 2774 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 11 01:24:15.420136 kubelet[2774]: I0311 01:24:15.420120 2774 policy_none.go:44] "Start" Mar 11 01:24:15.424198 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 11 01:24:15.432480 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 11 01:24:15.435611 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 11 01:24:15.446307 kubelet[2774]: E0311 01:24:15.446280 2774 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 01:24:15.446716 kubelet[2774]: I0311 01:24:15.446701 2774 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 11 01:24:15.447134 kubelet[2774]: I0311 01:24:15.446812 2774 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 01:24:15.447134 kubelet[2774]: I0311 01:24:15.447049 2774 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 11 01:24:15.449453 kubelet[2774]: E0311 01:24:15.448927 2774 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 11 01:24:15.449453 kubelet[2774]: E0311 01:24:15.448971 2774 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-541af3988c\" not found" Mar 11 01:24:15.492932 systemd[1]: Created slice kubepods-burstable-pod01249acea301953c7e74769d14091124.slice - libcontainer container kubepods-burstable-pod01249acea301953c7e74769d14091124.slice. Mar 11 01:24:15.498142 kubelet[2774]: E0311 01:24:15.498118 2774 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-541af3988c\" not found" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.502191 systemd[1]: Created slice kubepods-burstable-pod7104325e5aeeafd59243ca31b6bb7c58.slice - libcontainer container kubepods-burstable-pod7104325e5aeeafd59243ca31b6bb7c58.slice. 
Mar 11 01:24:15.505219 kubelet[2774]: E0311 01:24:15.505193 2774 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-541af3988c\" not found" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.520286 systemd[1]: Created slice kubepods-burstable-pod7b543be173f98fbf358bef08f3c5dae2.slice - libcontainer container kubepods-burstable-pod7b543be173f98fbf358bef08f3c5dae2.slice. Mar 11 01:24:15.521696 kubelet[2774]: E0311 01:24:15.521675 2774 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-541af3988c\" not found" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.549184 kubelet[2774]: I0311 01:24:15.549162 2774 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.549699 kubelet[2774]: E0311 01:24:15.549671 2774 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569129 kubelet[2774]: I0311 01:24:15.568930 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569129 kubelet[2774]: I0311 01:24:15.568956 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569129 
kubelet[2774]: I0311 01:24:15.568976 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569129 kubelet[2774]: I0311 01:24:15.568991 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01249acea301953c7e74769d14091124-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-541af3988c\" (UID: \"01249acea301953c7e74769d14091124\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569129 kubelet[2774]: I0311 01:24:15.569005 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7104325e5aeeafd59243ca31b6bb7c58-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-541af3988c\" (UID: \"7104325e5aeeafd59243ca31b6bb7c58\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569295 kubelet[2774]: I0311 01:24:15.569019 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7104325e5aeeafd59243ca31b6bb7c58-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-541af3988c\" (UID: \"7104325e5aeeafd59243ca31b6bb7c58\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569295 kubelet[2774]: I0311 01:24:15.569038 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7104325e5aeeafd59243ca31b6bb7c58-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.6-n-541af3988c\" (UID: \"7104325e5aeeafd59243ca31b6bb7c58\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569295 kubelet[2774]: I0311 01:24:15.569052 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569295 kubelet[2774]: I0311 01:24:15.569068 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.569295 kubelet[2774]: E0311 01:24:15.569073 2774 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-541af3988c?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms" Mar 11 01:24:15.752268 kubelet[2774]: I0311 01:24:15.751824 2774 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.752268 kubelet[2774]: E0311 01:24:15.752116 2774 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:15.807396 containerd[1710]: time="2026-03-11T01:24:15.807148732Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-541af3988c,Uid:01249acea301953c7e74769d14091124,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:15.812086 containerd[1710]: time="2026-03-11T01:24:15.811970491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-541af3988c,Uid:7104325e5aeeafd59243ca31b6bb7c58,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:15.827943 containerd[1710]: time="2026-03-11T01:24:15.827726489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-541af3988c,Uid:7b543be173f98fbf358bef08f3c5dae2,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:15.970388 kubelet[2774]: E0311 01:24:15.970347 2774 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-541af3988c?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms" Mar 11 01:24:16.155024 kubelet[2774]: I0311 01:24:16.154671 2774 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:16.155024 kubelet[2774]: E0311 01:24:16.154976 2774 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:16.508918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493967227.mount: Deactivated successfully. 
Mar 11 01:24:16.540466 containerd[1710]: time="2026-03-11T01:24:16.540199680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:24:16.543975 containerd[1710]: time="2026-03-11T01:24:16.543664280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 11 01:24:16.547635 containerd[1710]: time="2026-03-11T01:24:16.547600199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:24:16.552387 containerd[1710]: time="2026-03-11T01:24:16.551666719Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:24:16.555134 containerd[1710]: time="2026-03-11T01:24:16.555102799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 01:24:16.559460 containerd[1710]: time="2026-03-11T01:24:16.558757438Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:24:16.563679 containerd[1710]: time="2026-03-11T01:24:16.562418238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 01:24:16.567133 containerd[1710]: time="2026-03-11T01:24:16.566878797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:24:16.567924 
containerd[1710]: time="2026-03-11T01:24:16.567702037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 755.671986ms" Mar 11 01:24:16.569970 containerd[1710]: time="2026-03-11T01:24:16.569934117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 742.149068ms" Mar 11 01:24:16.570649 containerd[1710]: time="2026-03-11T01:24:16.570621797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 763.395425ms" Mar 11 01:24:16.771778 kubelet[2774]: E0311 01:24:16.771666 2774 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-541af3988c?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="1.6s" Mar 11 01:24:16.957426 kubelet[2774]: I0311 01:24:16.957182 2774 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:16.957549 kubelet[2774]: E0311 01:24:16.957486 2774 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-4081.3.6-n-541af3988c" Mar 11 
01:24:17.205832 containerd[1710]: time="2026-03-11T01:24:17.204691224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:17.205832 containerd[1710]: time="2026-03-11T01:24:17.204778584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:17.205832 containerd[1710]: time="2026-03-11T01:24:17.204889744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:17.205832 containerd[1710]: time="2026-03-11T01:24:17.201665064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:17.205832 containerd[1710]: time="2026-03-11T01:24:17.204256544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:17.205832 containerd[1710]: time="2026-03-11T01:24:17.204269944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:17.205832 containerd[1710]: time="2026-03-11T01:24:17.205532024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:17.206834 containerd[1710]: time="2026-03-11T01:24:17.205962584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:17.206834 containerd[1710]: time="2026-03-11T01:24:17.206018464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:17.206834 containerd[1710]: time="2026-03-11T01:24:17.206034224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:17.206834 containerd[1710]: time="2026-03-11T01:24:17.206107345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:17.206834 containerd[1710]: time="2026-03-11T01:24:17.206358305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:17.230961 systemd[1]: Started cri-containerd-b0646981311ba44d957f2325d4230821971c7bd8854070991acdb4e3ecccf901.scope - libcontainer container b0646981311ba44d957f2325d4230821971c7bd8854070991acdb4e3ecccf901. Mar 11 01:24:17.238253 systemd[1]: Started cri-containerd-04deec9e8ef188c2805f9306f4aab7a8e7d7e5236beae2eff612ee6f6318bf94.scope - libcontainer container 04deec9e8ef188c2805f9306f4aab7a8e7d7e5236beae2eff612ee6f6318bf94. Mar 11 01:24:17.241576 systemd[1]: Started cri-containerd-c9574365626e216aed58cf85748f1e67ed2ffd67d2b72c10be30441e73bab3b1.scope - libcontainer container c9574365626e216aed58cf85748f1e67ed2ffd67d2b72c10be30441e73bab3b1. 
Mar 11 01:24:17.291070 containerd[1710]: time="2026-03-11T01:24:17.289547071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-541af3988c,Uid:7b543be173f98fbf358bef08f3c5dae2,Namespace:kube-system,Attempt:0,} returns sandbox id \"04deec9e8ef188c2805f9306f4aab7a8e7d7e5236beae2eff612ee6f6318bf94\"" Mar 11 01:24:17.299193 containerd[1710]: time="2026-03-11T01:24:17.299154951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-541af3988c,Uid:7104325e5aeeafd59243ca31b6bb7c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9574365626e216aed58cf85748f1e67ed2ffd67d2b72c10be30441e73bab3b1\"" Mar 11 01:24:17.300404 containerd[1710]: time="2026-03-11T01:24:17.300372511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-541af3988c,Uid:01249acea301953c7e74769d14091124,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0646981311ba44d957f2325d4230821971c7bd8854070991acdb4e3ecccf901\"" Mar 11 01:24:17.304959 containerd[1710]: time="2026-03-11T01:24:17.304926752Z" level=info msg="CreateContainer within sandbox \"04deec9e8ef188c2805f9306f4aab7a8e7d7e5236beae2eff612ee6f6318bf94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 11 01:24:17.312116 containerd[1710]: time="2026-03-11T01:24:17.312081872Z" level=info msg="CreateContainer within sandbox \"b0646981311ba44d957f2325d4230821971c7bd8854070991acdb4e3ecccf901\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 11 01:24:17.318105 containerd[1710]: time="2026-03-11T01:24:17.318077113Z" level=info msg="CreateContainer within sandbox \"c9574365626e216aed58cf85748f1e67ed2ffd67d2b72c10be30441e73bab3b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 11 01:24:17.383241 containerd[1710]: time="2026-03-11T01:24:17.383195477Z" level=info msg="CreateContainer within sandbox 
\"04deec9e8ef188c2805f9306f4aab7a8e7d7e5236beae2eff612ee6f6318bf94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fcac1f66aa4535b62f0935e104d70f20cc1ae10c00e563c7d2442b6484b92022\"" Mar 11 01:24:17.383989 containerd[1710]: time="2026-03-11T01:24:17.383961277Z" level=info msg="StartContainer for \"fcac1f66aa4535b62f0935e104d70f20cc1ae10c00e563c7d2442b6484b92022\"" Mar 11 01:24:17.390788 containerd[1710]: time="2026-03-11T01:24:17.390700118Z" level=info msg="CreateContainer within sandbox \"c9574365626e216aed58cf85748f1e67ed2ffd67d2b72c10be30441e73bab3b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0def8a30674c173b9636c12f7e95dbb309bdf11cbd9d2a0bc1eb0e0aaf099078\"" Mar 11 01:24:17.391582 containerd[1710]: time="2026-03-11T01:24:17.391485878Z" level=info msg="StartContainer for \"0def8a30674c173b9636c12f7e95dbb309bdf11cbd9d2a0bc1eb0e0aaf099078\"" Mar 11 01:24:17.395803 containerd[1710]: time="2026-03-11T01:24:17.395687478Z" level=info msg="CreateContainer within sandbox \"b0646981311ba44d957f2325d4230821971c7bd8854070991acdb4e3ecccf901\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fbb9e9176c99216a38740befb074d99368d9867417b1b34b443e4112b371d221\"" Mar 11 01:24:17.397309 containerd[1710]: time="2026-03-11T01:24:17.396316598Z" level=info msg="StartContainer for \"fbb9e9176c99216a38740befb074d99368d9867417b1b34b443e4112b371d221\"" Mar 11 01:24:17.417600 systemd[1]: Started cri-containerd-fcac1f66aa4535b62f0935e104d70f20cc1ae10c00e563c7d2442b6484b92022.scope - libcontainer container fcac1f66aa4535b62f0935e104d70f20cc1ae10c00e563c7d2442b6484b92022. Mar 11 01:24:17.431590 systemd[1]: Started cri-containerd-0def8a30674c173b9636c12f7e95dbb309bdf11cbd9d2a0bc1eb0e0aaf099078.scope - libcontainer container 0def8a30674c173b9636c12f7e95dbb309bdf11cbd9d2a0bc1eb0e0aaf099078. 
Mar 11 01:24:17.440622 systemd[1]: Started cri-containerd-fbb9e9176c99216a38740befb074d99368d9867417b1b34b443e4112b371d221.scope - libcontainer container fbb9e9176c99216a38740befb074d99368d9867417b1b34b443e4112b371d221. Mar 11 01:24:17.448228 kubelet[2774]: E0311 01:24:17.448181 2774 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 01:24:17.482315 containerd[1710]: time="2026-03-11T01:24:17.482222684Z" level=info msg="StartContainer for \"0def8a30674c173b9636c12f7e95dbb309bdf11cbd9d2a0bc1eb0e0aaf099078\" returns successfully" Mar 11 01:24:17.498784 containerd[1710]: time="2026-03-11T01:24:17.497729925Z" level=info msg="StartContainer for \"fcac1f66aa4535b62f0935e104d70f20cc1ae10c00e563c7d2442b6484b92022\" returns successfully" Mar 11 01:24:17.498784 containerd[1710]: time="2026-03-11T01:24:17.497808245Z" level=info msg="StartContainer for \"fbb9e9176c99216a38740befb074d99368d9867417b1b34b443e4112b371d221\" returns successfully" Mar 11 01:24:18.411051 kubelet[2774]: E0311 01:24:18.411012 2774 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-541af3988c\" not found" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:18.413946 kubelet[2774]: E0311 01:24:18.413527 2774 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-541af3988c\" not found" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:18.417839 kubelet[2774]: E0311 01:24:18.417823 2774 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-541af3988c\" not found" node="ci-4081.3.6-n-541af3988c" 
Mar 11 01:24:18.560049 kubelet[2774]: I0311 01:24:18.559971 2774 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:18.984919 kubelet[2774]: E0311 01:24:18.984880 2774 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-541af3988c\" not found" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.098069 kubelet[2774]: I0311 01:24:19.097895 2774 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.098069 kubelet[2774]: E0311 01:24:19.097932 2774 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-541af3988c\": node \"ci-4081.3.6-n-541af3988c\" not found" Mar 11 01:24:19.168084 kubelet[2774]: I0311 01:24:19.167551 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.174236 kubelet[2774]: E0311 01:24:19.174213 2774 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-541af3988c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.174418 kubelet[2774]: I0311 01:24:19.174349 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.177524 kubelet[2774]: E0311 01:24:19.177466 2774 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-541af3988c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.177524 kubelet[2774]: I0311 01:24:19.177487 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.179321 kubelet[2774]: E0311 01:24:19.179288 2774 kubelet.go:3342] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.350755 kubelet[2774]: I0311 01:24:19.350558 2774 apiserver.go:52] "Watching apiserver" Mar 11 01:24:19.368242 kubelet[2774]: I0311 01:24:19.368210 2774 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 01:24:19.416644 kubelet[2774]: I0311 01:24:19.416616 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.418336 kubelet[2774]: I0311 01:24:19.416970 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.418336 kubelet[2774]: I0311 01:24:19.417270 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.420270 kubelet[2774]: E0311 01:24:19.419985 2774 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-541af3988c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.420270 kubelet[2774]: E0311 01:24:19.420173 2774 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-541af3988c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:19.420474 kubelet[2774]: E0311 01:24:19.420457 2774 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:20.418673 kubelet[2774]: I0311 
01:24:20.418639 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:20.419044 kubelet[2774]: I0311 01:24:20.419023 2774 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:20.427654 kubelet[2774]: I0311 01:24:20.427630 2774 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:24:20.432890 kubelet[2774]: I0311 01:24:20.432857 2774 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:24:21.150565 systemd[1]: Reloading requested from client PID 3059 ('systemctl') (unit session-9.scope)... Mar 11 01:24:21.150582 systemd[1]: Reloading... Mar 11 01:24:21.274456 zram_generator::config[3114]: No configuration found. Mar 11 01:24:21.382982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 01:24:21.473304 systemd[1]: Reloading finished in 322 ms. Mar 11 01:24:21.507370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:24:21.524813 systemd[1]: kubelet.service: Deactivated successfully. Mar 11 01:24:21.525140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:24:21.529748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:24:21.692738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 01:24:21.705793 (kubelet)[3163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 01:24:21.741332 kubelet[3163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 01:24:21.753481 kubelet[3163]: I0311 01:24:21.752997 3163 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 11 01:24:21.753481 kubelet[3163]: I0311 01:24:21.753036 3163 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 01:24:21.753481 kubelet[3163]: I0311 01:24:21.753058 3163 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 01:24:21.753481 kubelet[3163]: I0311 01:24:21.753063 3163 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 11 01:24:21.753834 kubelet[3163]: I0311 01:24:21.753821 3163 server.go:951] "Client rotation is on, will bootstrap in background" Mar 11 01:24:21.755323 kubelet[3163]: I0311 01:24:21.755306 3163 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 11 01:24:21.757558 kubelet[3163]: I0311 01:24:21.757501 3163 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 01:24:21.761456 kubelet[3163]: E0311 01:24:21.760861 3163 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 01:24:21.761456 kubelet[3163]: I0311 01:24:21.760914 3163 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 11 01:24:21.763752 kubelet[3163]: I0311 01:24:21.763603 3163 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 11 01:24:21.763834 kubelet[3163]: I0311 01:24:21.763809 3163 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 01:24:21.763979 kubelet[3163]: I0311 01:24:21.763832 3163 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-541af3988c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
Mar 11 01:24:21.764052 kubelet[3163]: I0311 01:24:21.763987 3163 topology_manager.go:143] "Creating topology manager with none policy" Mar 11 01:24:21.764052 kubelet[3163]: I0311 01:24:21.763995 3163 container_manager_linux.go:308] "Creating device plugin manager" Mar 11 01:24:21.764052 kubelet[3163]: I0311 01:24:21.764016 3163 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 11 01:24:21.764200 kubelet[3163]: I0311 01:24:21.764186 3163 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 11 01:24:21.764333 kubelet[3163]: I0311 01:24:21.764315 3163 kubelet.go:482] "Attempting to sync node with API server" Mar 11 01:24:21.764368 kubelet[3163]: I0311 01:24:21.764346 3163 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 01:24:21.764368 kubelet[3163]: I0311 01:24:21.764366 3163 kubelet.go:394] "Adding apiserver pod source" Mar 11 01:24:21.766557 kubelet[3163]: I0311 01:24:21.764375 3163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 01:24:21.767379 kubelet[3163]: I0311 01:24:21.767314 3163 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 11 01:24:21.769876 kubelet[3163]: I0311 01:24:21.769853 3163 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 01:24:21.770460 kubelet[3163]: I0311 01:24:21.769980 3163 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 11 01:24:21.775662 kubelet[3163]: I0311 01:24:21.775643 3163 server.go:1257] "Started kubelet" Mar 11 01:24:21.779703 kubelet[3163]: I0311 01:24:21.779656 3163 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 11 01:24:21.802710 kubelet[3163]: I0311 01:24:21.802632 3163 server.go:182] "Starting 
to listen" address="0.0.0.0" port=10250 Mar 11 01:24:21.803643 kubelet[3163]: I0311 01:24:21.803628 3163 server.go:317] "Adding debug handlers to kubelet server" Mar 11 01:24:21.807018 kubelet[3163]: I0311 01:24:21.806988 3163 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 01:24:21.808013 kubelet[3163]: I0311 01:24:21.807998 3163 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 11 01:24:21.808199 kubelet[3163]: E0311 01:24:21.808180 3163 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-541af3988c\" not found" Mar 11 01:24:21.808738 kubelet[3163]: I0311 01:24:21.808723 3163 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 01:24:21.808842 kubelet[3163]: I0311 01:24:21.808830 3163 reconciler.go:29] "Reconciler: start to sync state" Mar 11 01:24:21.813178 kubelet[3163]: I0311 01:24:21.813123 3163 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 01:24:21.813468 kubelet[3163]: I0311 01:24:21.813229 3163 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 11 01:24:21.813468 kubelet[3163]: I0311 01:24:21.813410 3163 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 01:24:21.816273 kubelet[3163]: I0311 01:24:21.816200 3163 factory.go:223] Registration of the systemd container factory successfully Mar 11 01:24:21.816532 kubelet[3163]: I0311 01:24:21.816509 3163 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 01:24:21.819504 kubelet[3163]: I0311 01:24:21.818649 3163 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 11 01:24:21.819743 kubelet[3163]: I0311 01:24:21.819718 3163 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 11 01:24:21.819743 kubelet[3163]: I0311 01:24:21.819736 3163 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 11 01:24:21.819819 kubelet[3163]: I0311 01:24:21.819759 3163 kubelet.go:2501] "Starting kubelet main sync loop" Mar 11 01:24:21.819819 kubelet[3163]: E0311 01:24:21.819796 3163 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 01:24:21.822819 kubelet[3163]: I0311 01:24:21.822791 3163 factory.go:223] Registration of the containerd container factory successfully Mar 11 01:24:21.849467 kubelet[3163]: E0311 01:24:21.848944 3163 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 01:24:21.896037 kubelet[3163]: I0311 01:24:21.896011 3163 cpu_manager.go:225] "Starting" policy="none" Mar 11 01:24:21.896445 kubelet[3163]: I0311 01:24:21.896415 3163 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 11 01:24:21.896545 kubelet[3163]: I0311 01:24:21.896534 3163 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 11 01:24:21.897627 kubelet[3163]: I0311 01:24:21.897607 3163 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 11 01:24:21.897745 kubelet[3163]: I0311 01:24:21.897719 3163 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 11 01:24:21.897899 kubelet[3163]: I0311 01:24:21.897865 3163 policy_none.go:50] "Start" Mar 11 01:24:21.897975 kubelet[3163]: I0311 01:24:21.897965 3163 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 01:24:21.898038 
kubelet[3163]: I0311 01:24:21.898021 3163 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 11 01:24:21.898501 kubelet[3163]: I0311 01:24:21.898467 3163 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 11 01:24:21.898608 kubelet[3163]: I0311 01:24:21.898596 3163 policy_none.go:44] "Start" Mar 11 01:24:21.903082 kubelet[3163]: E0311 01:24:21.903062 3163 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 01:24:21.904131 kubelet[3163]: I0311 01:24:21.904115 3163 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 11 01:24:21.904426 kubelet[3163]: I0311 01:24:21.904318 3163 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 01:24:21.905062 kubelet[3163]: I0311 01:24:21.905046 3163 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 11 01:24:21.905971 kubelet[3163]: E0311 01:24:21.905949 3163 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 11 01:24:21.920464 kubelet[3163]: I0311 01:24:21.920416 3163 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:21.922461 kubelet[3163]: I0311 01:24:21.920801 3163 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:21.922461 kubelet[3163]: I0311 01:24:21.921082 3163 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:21.933492 kubelet[3163]: I0311 01:24:21.933459 3163 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:24:21.933878 kubelet[3163]: I0311 01:24:21.933845 3163 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:24:21.933878 kubelet[3163]: E0311 01:24:21.933883 3163 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-541af3988c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:21.935355 kubelet[3163]: I0311 01:24:21.935327 3163 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:24:21.935445 kubelet[3163]: E0311 01:24:21.935365 3163 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-541af3988c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.007718 kubelet[3163]: I0311 01:24:22.007621 3163 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.010883 kubelet[3163]: I0311 01:24:22.010487 
3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.010883 kubelet[3163]: I0311 01:24:22.010515 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.010883 kubelet[3163]: I0311 01:24:22.010543 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.010883 kubelet[3163]: I0311 01:24:22.010561 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01249acea301953c7e74769d14091124-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-541af3988c\" (UID: \"01249acea301953c7e74769d14091124\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.010883 kubelet[3163]: I0311 01:24:22.010577 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7104325e5aeeafd59243ca31b6bb7c58-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-541af3988c\" (UID: 
\"7104325e5aeeafd59243ca31b6bb7c58\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.011073 kubelet[3163]: I0311 01:24:22.010592 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.011073 kubelet[3163]: I0311 01:24:22.010609 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7104325e5aeeafd59243ca31b6bb7c58-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-541af3988c\" (UID: \"7104325e5aeeafd59243ca31b6bb7c58\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.011073 kubelet[3163]: I0311 01:24:22.010625 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7104325e5aeeafd59243ca31b6bb7c58-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-541af3988c\" (UID: \"7104325e5aeeafd59243ca31b6bb7c58\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.011073 kubelet[3163]: I0311 01:24:22.010641 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b543be173f98fbf358bef08f3c5dae2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-541af3988c\" (UID: \"7b543be173f98fbf358bef08f3c5dae2\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.021286 kubelet[3163]: I0311 01:24:22.021258 3163 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.021388 
kubelet[3163]: I0311 01:24:22.021326 3163 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.765644 kubelet[3163]: I0311 01:24:22.765289 3163 apiserver.go:52] "Watching apiserver" Mar 11 01:24:22.809272 kubelet[3163]: I0311 01:24:22.809195 3163 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 01:24:22.847058 kubelet[3163]: I0311 01:24:22.846555 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" podStartSLOduration=2.84654019 podStartE2EDuration="2.84654019s" podCreationTimestamp="2026-03-11 01:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:24:22.827468869 +0000 UTC m=+1.118770801" watchObservedRunningTime="2026-03-11 01:24:22.84654019 +0000 UTC m=+1.137842122" Mar 11 01:24:22.862150 kubelet[3163]: I0311 01:24:22.862093 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-541af3988c" podStartSLOduration=2.862079832 podStartE2EDuration="2.862079832s" podCreationTimestamp="2026-03-11 01:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:24:22.84692243 +0000 UTC m=+1.138224362" watchObservedRunningTime="2026-03-11 01:24:22.862079832 +0000 UTC m=+1.153381764" Mar 11 01:24:22.866476 kubelet[3163]: I0311 01:24:22.866083 3163 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:22.878745 kubelet[3163]: I0311 01:24:22.877830 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-541af3988c" podStartSLOduration=1.877820313 podStartE2EDuration="1.877820313s" 
podCreationTimestamp="2026-03-11 01:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:24:22.862423312 +0000 UTC m=+1.153725244" watchObservedRunningTime="2026-03-11 01:24:22.877820313 +0000 UTC m=+1.169122245" Mar 11 01:24:22.886716 kubelet[3163]: I0311 01:24:22.886684 3163 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:24:22.886957 kubelet[3163]: E0311 01:24:22.886856 3163 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-541af3988c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-541af3988c" Mar 11 01:24:26.142408 kubelet[3163]: I0311 01:24:26.142374 3163 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 11 01:24:26.143116 containerd[1710]: time="2026-03-11T01:24:26.142948902Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 11 01:24:26.143814 kubelet[3163]: I0311 01:24:26.143091 3163 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 11 01:24:27.097933 systemd[1]: Created slice kubepods-besteffort-podddfb834a_feb7_4f24_b2b1_88d6d26679b7.slice - libcontainer container kubepods-besteffort-podddfb834a_feb7_4f24_b2b1_88d6d26679b7.slice. 
Mar 11 01:24:27.139276 kubelet[3163]: I0311 01:24:27.139225 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddfb834a-feb7-4f24-b2b1-88d6d26679b7-xtables-lock\") pod \"kube-proxy-8vv9p\" (UID: \"ddfb834a-feb7-4f24-b2b1-88d6d26679b7\") " pod="kube-system/kube-proxy-8vv9p" Mar 11 01:24:27.139552 kubelet[3163]: I0311 01:24:27.139259 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddfb834a-feb7-4f24-b2b1-88d6d26679b7-kube-proxy\") pod \"kube-proxy-8vv9p\" (UID: \"ddfb834a-feb7-4f24-b2b1-88d6d26679b7\") " pod="kube-system/kube-proxy-8vv9p" Mar 11 01:24:27.139552 kubelet[3163]: I0311 01:24:27.139477 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddfb834a-feb7-4f24-b2b1-88d6d26679b7-lib-modules\") pod \"kube-proxy-8vv9p\" (UID: \"ddfb834a-feb7-4f24-b2b1-88d6d26679b7\") " pod="kube-system/kube-proxy-8vv9p" Mar 11 01:24:27.139552 kubelet[3163]: I0311 01:24:27.139497 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7phg\" (UniqueName: \"kubernetes.io/projected/ddfb834a-feb7-4f24-b2b1-88d6d26679b7-kube-api-access-c7phg\") pod \"kube-proxy-8vv9p\" (UID: \"ddfb834a-feb7-4f24-b2b1-88d6d26679b7\") " pod="kube-system/kube-proxy-8vv9p" Mar 11 01:24:27.324529 systemd[1]: Created slice kubepods-besteffort-podd8ed3862_8a09_4bad_abbf_e81acd6d6fa8.slice - libcontainer container kubepods-besteffort-podd8ed3862_8a09_4bad_abbf_e81acd6d6fa8.slice. 
Mar 11 01:24:27.341278 kubelet[3163]: I0311 01:24:27.341229 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzxcl\" (UniqueName: \"kubernetes.io/projected/d8ed3862-8a09-4bad-abbf-e81acd6d6fa8-kube-api-access-vzxcl\") pod \"tigera-operator-6cf4cccc57-dlj58\" (UID: \"d8ed3862-8a09-4bad-abbf-e81acd6d6fa8\") " pod="tigera-operator/tigera-operator-6cf4cccc57-dlj58" Mar 11 01:24:27.341278 kubelet[3163]: I0311 01:24:27.341275 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8ed3862-8a09-4bad-abbf-e81acd6d6fa8-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-dlj58\" (UID: \"d8ed3862-8a09-4bad-abbf-e81acd6d6fa8\") " pod="tigera-operator/tigera-operator-6cf4cccc57-dlj58" Mar 11 01:24:27.416563 containerd[1710]: time="2026-03-11T01:24:27.416375287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vv9p,Uid:ddfb834a-feb7-4f24-b2b1-88d6d26679b7,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:27.469048 containerd[1710]: time="2026-03-11T01:24:27.468817050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:27.469048 containerd[1710]: time="2026-03-11T01:24:27.468879090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:27.469048 containerd[1710]: time="2026-03-11T01:24:27.468893650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:27.469048 containerd[1710]: time="2026-03-11T01:24:27.468983530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:27.491583 systemd[1]: Started cri-containerd-2026b66e56493b80df9c4d548a7250e3f7885501f4b4ca7e43a1da9069425f7b.scope - libcontainer container 2026b66e56493b80df9c4d548a7250e3f7885501f4b4ca7e43a1da9069425f7b. Mar 11 01:24:27.510939 containerd[1710]: time="2026-03-11T01:24:27.510665812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vv9p,Uid:ddfb834a-feb7-4f24-b2b1-88d6d26679b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2026b66e56493b80df9c4d548a7250e3f7885501f4b4ca7e43a1da9069425f7b\"" Mar 11 01:24:27.521768 containerd[1710]: time="2026-03-11T01:24:27.521726213Z" level=info msg="CreateContainer within sandbox \"2026b66e56493b80df9c4d548a7250e3f7885501f4b4ca7e43a1da9069425f7b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 11 01:24:27.562348 containerd[1710]: time="2026-03-11T01:24:27.562295135Z" level=info msg="CreateContainer within sandbox \"2026b66e56493b80df9c4d548a7250e3f7885501f4b4ca7e43a1da9069425f7b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f41f54d0d580e361ed6b666512625de1782972025dfbe91d713e798a1c9619e2\"" Mar 11 01:24:27.563143 containerd[1710]: time="2026-03-11T01:24:27.563070335Z" level=info msg="StartContainer for \"f41f54d0d580e361ed6b666512625de1782972025dfbe91d713e798a1c9619e2\"" Mar 11 01:24:27.586686 systemd[1]: Started cri-containerd-f41f54d0d580e361ed6b666512625de1782972025dfbe91d713e798a1c9619e2.scope - libcontainer container f41f54d0d580e361ed6b666512625de1782972025dfbe91d713e798a1c9619e2. 
Mar 11 01:24:27.614121 containerd[1710]: time="2026-03-11T01:24:27.614067378Z" level=info msg="StartContainer for \"f41f54d0d580e361ed6b666512625de1782972025dfbe91d713e798a1c9619e2\" returns successfully" Mar 11 01:24:27.635895 containerd[1710]: time="2026-03-11T01:24:27.635855099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-dlj58,Uid:d8ed3862-8a09-4bad-abbf-e81acd6d6fa8,Namespace:tigera-operator,Attempt:0,}" Mar 11 01:24:27.682991 containerd[1710]: time="2026-03-11T01:24:27.682801421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:27.683587 containerd[1710]: time="2026-03-11T01:24:27.682973781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:27.683587 containerd[1710]: time="2026-03-11T01:24:27.683359581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:27.684222 containerd[1710]: time="2026-03-11T01:24:27.683579461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:27.701567 systemd[1]: Started cri-containerd-20b62966ecc1ab4afb9baf2408bf8d1114b1aaebd16236faed4414fc1bf55603.scope - libcontainer container 20b62966ecc1ab4afb9baf2408bf8d1114b1aaebd16236faed4414fc1bf55603. 
Mar 11 01:24:27.733424 containerd[1710]: time="2026-03-11T01:24:27.733083864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-dlj58,Uid:d8ed3862-8a09-4bad-abbf-e81acd6d6fa8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"20b62966ecc1ab4afb9baf2408bf8d1114b1aaebd16236faed4414fc1bf55603\"" Mar 11 01:24:27.736613 containerd[1710]: time="2026-03-11T01:24:27.736586304Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 11 01:24:27.890512 kubelet[3163]: I0311 01:24:27.890451 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-8vv9p" podStartSLOduration=0.890424992 podStartE2EDuration="890.424992ms" podCreationTimestamp="2026-03-11 01:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:24:27.890316672 +0000 UTC m=+6.181618604" watchObservedRunningTime="2026-03-11 01:24:27.890424992 +0000 UTC m=+6.181726884" Mar 11 01:24:28.255615 systemd[1]: run-containerd-runc-k8s.io-2026b66e56493b80df9c4d548a7250e3f7885501f4b4ca7e43a1da9069425f7b-runc.T5ljRF.mount: Deactivated successfully. Mar 11 01:24:29.412606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58879006.mount: Deactivated successfully. 
Mar 11 01:24:30.377460 containerd[1710]: time="2026-03-11T01:24:30.377242599Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:30.380556 containerd[1710]: time="2026-03-11T01:24:30.380523959Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 11 01:24:30.384262 containerd[1710]: time="2026-03-11T01:24:30.384217159Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:30.389985 containerd[1710]: time="2026-03-11T01:24:30.389936159Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:30.391151 containerd[1710]: time="2026-03-11T01:24:30.390631519Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.653950175s" Mar 11 01:24:30.391151 containerd[1710]: time="2026-03-11T01:24:30.390667079Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 11 01:24:30.413143 containerd[1710]: time="2026-03-11T01:24:30.413109080Z" level=info msg="CreateContainer within sandbox \"20b62966ecc1ab4afb9baf2408bf8d1114b1aaebd16236faed4414fc1bf55603\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 11 01:24:30.453517 containerd[1710]: time="2026-03-11T01:24:30.453409523Z" level=info msg="CreateContainer within sandbox 
\"20b62966ecc1ab4afb9baf2408bf8d1114b1aaebd16236faed4414fc1bf55603\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f51d3e9fb2c7ccce14a72d2d22a6fa626842d6fb02ae94c49587460b8a5fc5c8\"" Mar 11 01:24:30.454084 containerd[1710]: time="2026-03-11T01:24:30.454062723Z" level=info msg="StartContainer for \"f51d3e9fb2c7ccce14a72d2d22a6fa626842d6fb02ae94c49587460b8a5fc5c8\"" Mar 11 01:24:30.483575 systemd[1]: Started cri-containerd-f51d3e9fb2c7ccce14a72d2d22a6fa626842d6fb02ae94c49587460b8a5fc5c8.scope - libcontainer container f51d3e9fb2c7ccce14a72d2d22a6fa626842d6fb02ae94c49587460b8a5fc5c8. Mar 11 01:24:30.508163 containerd[1710]: time="2026-03-11T01:24:30.508053285Z" level=info msg="StartContainer for \"f51d3e9fb2c7ccce14a72d2d22a6fa626842d6fb02ae94c49587460b8a5fc5c8\" returns successfully" Mar 11 01:24:31.436958 systemd[1]: run-containerd-runc-k8s.io-f51d3e9fb2c7ccce14a72d2d22a6fa626842d6fb02ae94c49587460b8a5fc5c8-runc.AeJED2.mount: Deactivated successfully. Mar 11 01:24:31.501539 kubelet[3163]: I0311 01:24:31.500886 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-dlj58" podStartSLOduration=1.8448194409999998 podStartE2EDuration="4.500872256s" podCreationTimestamp="2026-03-11 01:24:27 +0000 UTC" firstStartedPulling="2026-03-11 01:24:27.735336584 +0000 UTC m=+6.026638516" lastFinishedPulling="2026-03-11 01:24:30.391389399 +0000 UTC m=+8.682691331" observedRunningTime="2026-03-11 01:24:30.900082345 +0000 UTC m=+9.191384277" watchObservedRunningTime="2026-03-11 01:24:31.500872256 +0000 UTC m=+9.792174188" Mar 11 01:24:36.182938 sudo[2217]: pam_unix(sudo:session): session closed for user root Mar 11 01:24:36.260256 sshd[2214]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:36.264905 systemd[1]: sshd@6-10.200.20.15:22-10.200.16.10:43728.service: Deactivated successfully. Mar 11 01:24:36.266400 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 11 01:24:36.267489 systemd[1]: session-9.scope: Consumed 4.984s CPU time, 153.1M memory peak, 0B memory swap peak. Mar 11 01:24:36.269735 systemd-logind[1683]: Session 9 logged out. Waiting for processes to exit. Mar 11 01:24:36.270896 systemd-logind[1683]: Removed session 9. Mar 11 01:24:41.423055 systemd[1]: Created slice kubepods-besteffort-pod78b69c07_9d00_42f0_9dad_c2147898a827.slice - libcontainer container kubepods-besteffort-pod78b69c07_9d00_42f0_9dad_c2147898a827.slice. Mar 11 01:24:41.522529 systemd[1]: Created slice kubepods-besteffort-podc54a11df_f8af_494d_bbc0_9391eca54083.slice - libcontainer container kubepods-besteffort-podc54a11df_f8af_494d_bbc0_9391eca54083.slice. Mar 11 01:24:41.527426 kubelet[3163]: I0311 01:24:41.527365 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78b69c07-9d00-42f0-9dad-c2147898a827-tigera-ca-bundle\") pod \"calico-typha-5cc864b697-4rx75\" (UID: \"78b69c07-9d00-42f0-9dad-c2147898a827\") " pod="calico-system/calico-typha-5cc864b697-4rx75" Mar 11 01:24:41.527426 kubelet[3163]: I0311 01:24:41.527400 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/78b69c07-9d00-42f0-9dad-c2147898a827-typha-certs\") pod \"calico-typha-5cc864b697-4rx75\" (UID: \"78b69c07-9d00-42f0-9dad-c2147898a827\") " pod="calico-system/calico-typha-5cc864b697-4rx75" Mar 11 01:24:41.527426 kubelet[3163]: I0311 01:24:41.527419 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7jh\" (UniqueName: \"kubernetes.io/projected/78b69c07-9d00-42f0-9dad-c2147898a827-kube-api-access-pm7jh\") pod \"calico-typha-5cc864b697-4rx75\" (UID: \"78b69c07-9d00-42f0-9dad-c2147898a827\") " pod="calico-system/calico-typha-5cc864b697-4rx75" Mar 11 01:24:41.624422 kubelet[3163]: E0311 01:24:41.624381 
3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:41.630458 kubelet[3163]: I0311 01:24:41.629563 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-var-lib-calico\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630458 kubelet[3163]: I0311 01:24:41.629594 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm7r2\" (UniqueName: \"kubernetes.io/projected/c54a11df-f8af-494d-bbc0-9391eca54083-kube-api-access-bm7r2\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630458 kubelet[3163]: I0311 01:24:41.629627 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-cni-bin-dir\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630458 kubelet[3163]: I0311 01:24:41.629642 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-policysync\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630458 kubelet[3163]: I0311 01:24:41.629654 3163 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-bpffs\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630634 kubelet[3163]: I0311 01:24:41.629670 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-var-run-calico\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630634 kubelet[3163]: I0311 01:24:41.629683 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-xtables-lock\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630634 kubelet[3163]: I0311 01:24:41.629707 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-cni-log-dir\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630634 kubelet[3163]: I0311 01:24:41.629740 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-nodeproc\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630634 kubelet[3163]: I0311 01:24:41.629780 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-cni-net-dir\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630740 kubelet[3163]: I0311 01:24:41.629795 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-flexvol-driver-host\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630740 kubelet[3163]: I0311 01:24:41.629811 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-lib-modules\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630740 kubelet[3163]: I0311 01:24:41.629826 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c54a11df-f8af-494d-bbc0-9391eca54083-sys-fs\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630740 kubelet[3163]: I0311 01:24:41.629841 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54a11df-f8af-494d-bbc0-9391eca54083-tigera-ca-bundle\") pod \"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.630740 kubelet[3163]: I0311 01:24:41.629856 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c54a11df-f8af-494d-bbc0-9391eca54083-node-certs\") pod 
\"calico-node-m5rfl\" (UID: \"c54a11df-f8af-494d-bbc0-9391eca54083\") " pod="calico-system/calico-node-m5rfl" Mar 11 01:24:41.732759 kubelet[3163]: I0311 01:24:41.730589 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37965836-ef4b-4e97-87a3-07d4d846b05a-registration-dir\") pod \"csi-node-driver-4xdrq\" (UID: \"37965836-ef4b-4e97-87a3-07d4d846b05a\") " pod="calico-system/csi-node-driver-4xdrq" Mar 11 01:24:41.732759 kubelet[3163]: I0311 01:24:41.730739 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37965836-ef4b-4e97-87a3-07d4d846b05a-kubelet-dir\") pod \"csi-node-driver-4xdrq\" (UID: \"37965836-ef4b-4e97-87a3-07d4d846b05a\") " pod="calico-system/csi-node-driver-4xdrq" Mar 11 01:24:41.732759 kubelet[3163]: I0311 01:24:41.730756 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37965836-ef4b-4e97-87a3-07d4d846b05a-socket-dir\") pod \"csi-node-driver-4xdrq\" (UID: \"37965836-ef4b-4e97-87a3-07d4d846b05a\") " pod="calico-system/csi-node-driver-4xdrq" Mar 11 01:24:41.732759 kubelet[3163]: I0311 01:24:41.730773 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/37965836-ef4b-4e97-87a3-07d4d846b05a-varrun\") pod \"csi-node-driver-4xdrq\" (UID: \"37965836-ef4b-4e97-87a3-07d4d846b05a\") " pod="calico-system/csi-node-driver-4xdrq" Mar 11 01:24:41.732759 kubelet[3163]: I0311 01:24:41.730819 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpmz6\" (UniqueName: \"kubernetes.io/projected/37965836-ef4b-4e97-87a3-07d4d846b05a-kube-api-access-jpmz6\") pod \"csi-node-driver-4xdrq\" (UID: 
\"37965836-ef4b-4e97-87a3-07d4d846b05a\") " pod="calico-system/csi-node-driver-4xdrq" Mar 11 01:24:41.732759 kubelet[3163]: E0311 01:24:41.732100 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.733666 kubelet[3163]: W0311 01:24:41.732119 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.733666 kubelet[3163]: E0311 01:24:41.732147 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.733666 kubelet[3163]: E0311 01:24:41.732305 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.733666 kubelet[3163]: W0311 01:24:41.732313 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.733666 kubelet[3163]: E0311 01:24:41.732321 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.733666 kubelet[3163]: E0311 01:24:41.732470 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.733666 kubelet[3163]: W0311 01:24:41.732479 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.733666 kubelet[3163]: E0311 01:24:41.732488 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.733666 kubelet[3163]: E0311 01:24:41.733295 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.733666 kubelet[3163]: W0311 01:24:41.733306 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.733863 kubelet[3163]: E0311 01:24:41.733320 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.734412 kubelet[3163]: E0311 01:24:41.734280 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.734412 kubelet[3163]: W0311 01:24:41.734294 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.734412 kubelet[3163]: E0311 01:24:41.734316 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.735450 kubelet[3163]: E0311 01:24:41.735350 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.735450 kubelet[3163]: W0311 01:24:41.735371 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.735450 kubelet[3163]: E0311 01:24:41.735386 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.735791 kubelet[3163]: E0311 01:24:41.735722 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.735791 kubelet[3163]: W0311 01:24:41.735736 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.735791 kubelet[3163]: E0311 01:24:41.735746 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.736123 kubelet[3163]: E0311 01:24:41.736041 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.736123 kubelet[3163]: W0311 01:24:41.736056 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.736123 kubelet[3163]: E0311 01:24:41.736066 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.736380 kubelet[3163]: E0311 01:24:41.736329 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.736380 kubelet[3163]: W0311 01:24:41.736342 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.736380 kubelet[3163]: E0311 01:24:41.736353 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.736752 kubelet[3163]: E0311 01:24:41.736686 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.736752 kubelet[3163]: W0311 01:24:41.736700 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.736752 kubelet[3163]: E0311 01:24:41.736712 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.737183 kubelet[3163]: E0311 01:24:41.737084 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.737183 kubelet[3163]: W0311 01:24:41.737104 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.737183 kubelet[3163]: E0311 01:24:41.737115 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.738203 kubelet[3163]: E0311 01:24:41.737409 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.738203 kubelet[3163]: W0311 01:24:41.737421 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.738203 kubelet[3163]: E0311 01:24:41.737444 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.740613 kubelet[3163]: E0311 01:24:41.740587 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.741147 kubelet[3163]: W0311 01:24:41.741125 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.741241 kubelet[3163]: E0311 01:24:41.741229 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.741633 containerd[1710]: time="2026-03-11T01:24:41.741532292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cc864b697-4rx75,Uid:78b69c07-9d00-42f0-9dad-c2147898a827,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:41.745440 kubelet[3163]: E0311 01:24:41.745411 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.745514 kubelet[3163]: W0311 01:24:41.745444 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.745514 kubelet[3163]: E0311 01:24:41.745460 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.757913 kubelet[3163]: E0311 01:24:41.757854 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.757913 kubelet[3163]: W0311 01:24:41.757871 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.757913 kubelet[3163]: E0311 01:24:41.757883 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.787651 containerd[1710]: time="2026-03-11T01:24:41.787516942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:41.787651 containerd[1710]: time="2026-03-11T01:24:41.787599182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:41.787897 containerd[1710]: time="2026-03-11T01:24:41.787748982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:41.788115 containerd[1710]: time="2026-03-11T01:24:41.788063902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:41.805599 systemd[1]: Started cri-containerd-7c70f0a7beeda9dda9fa6b0dfb172ef7be2cf6feb099f8e637ce1b9334c03f80.scope - libcontainer container 7c70f0a7beeda9dda9fa6b0dfb172ef7be2cf6feb099f8e637ce1b9334c03f80. 
Mar 11 01:24:41.831556 kubelet[3163]: E0311 01:24:41.831535 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.832572 kubelet[3163]: W0311 01:24:41.831702 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.832572 kubelet[3163]: E0311 01:24:41.831726 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.832572 kubelet[3163]: E0311 01:24:41.832130 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.832572 kubelet[3163]: W0311 01:24:41.832278 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.832572 kubelet[3163]: E0311 01:24:41.832291 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.832914 kubelet[3163]: E0311 01:24:41.832744 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.832914 kubelet[3163]: W0311 01:24:41.832756 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.832914 kubelet[3163]: E0311 01:24:41.832768 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.833680 kubelet[3163]: E0311 01:24:41.833659 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.833680 kubelet[3163]: W0311 01:24:41.833675 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.833806 kubelet[3163]: E0311 01:24:41.833689 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.833876 kubelet[3163]: E0311 01:24:41.833862 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.833876 kubelet[3163]: W0311 01:24:41.833874 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.833961 kubelet[3163]: E0311 01:24:41.833885 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.834045 kubelet[3163]: E0311 01:24:41.834032 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.834045 kubelet[3163]: W0311 01:24:41.834043 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.834126 kubelet[3163]: E0311 01:24:41.834051 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.834274 kubelet[3163]: E0311 01:24:41.834259 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.834274 kubelet[3163]: W0311 01:24:41.834271 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.834407 kubelet[3163]: E0311 01:24:41.834282 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.834970 kubelet[3163]: E0311 01:24:41.834952 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.834970 kubelet[3163]: W0311 01:24:41.834968 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.835097 kubelet[3163]: E0311 01:24:41.834983 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.835199 kubelet[3163]: E0311 01:24:41.835185 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.835199 kubelet[3163]: W0311 01:24:41.835198 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.835313 kubelet[3163]: E0311 01:24:41.835208 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.835453 kubelet[3163]: E0311 01:24:41.835343 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.835453 kubelet[3163]: W0311 01:24:41.835350 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.835453 kubelet[3163]: E0311 01:24:41.835359 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.835678 kubelet[3163]: E0311 01:24:41.835598 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.835678 kubelet[3163]: W0311 01:24:41.835609 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.835678 kubelet[3163]: E0311 01:24:41.835620 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.836081 kubelet[3163]: E0311 01:24:41.836062 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.836081 kubelet[3163]: W0311 01:24:41.836078 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.836194 kubelet[3163]: E0311 01:24:41.836091 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.837076 kubelet[3163]: E0311 01:24:41.837056 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.837076 kubelet[3163]: W0311 01:24:41.837078 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.837233 kubelet[3163]: E0311 01:24:41.837096 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.837576 kubelet[3163]: E0311 01:24:41.837556 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.837576 kubelet[3163]: W0311 01:24:41.837572 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.837662 kubelet[3163]: E0311 01:24:41.837585 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.838278 containerd[1710]: time="2026-03-11T01:24:41.838244552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m5rfl,Uid:c54a11df-f8af-494d-bbc0-9391eca54083,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:41.838511 kubelet[3163]: E0311 01:24:41.838493 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.838511 kubelet[3163]: W0311 01:24:41.838509 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.838606 kubelet[3163]: E0311 01:24:41.838522 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.839018 kubelet[3163]: E0311 01:24:41.838995 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.839018 kubelet[3163]: W0311 01:24:41.839015 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.839096 kubelet[3163]: E0311 01:24:41.839027 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.839315 kubelet[3163]: E0311 01:24:41.839221 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.839315 kubelet[3163]: W0311 01:24:41.839273 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.839315 kubelet[3163]: E0311 01:24:41.839284 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.839487 kubelet[3163]: E0311 01:24:41.839471 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.839487 kubelet[3163]: W0311 01:24:41.839481 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.839536 kubelet[3163]: E0311 01:24:41.839489 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.839848 kubelet[3163]: E0311 01:24:41.839648 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.839848 kubelet[3163]: W0311 01:24:41.839658 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.839848 kubelet[3163]: E0311 01:24:41.839666 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.839848 kubelet[3163]: E0311 01:24:41.839805 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.839848 kubelet[3163]: W0311 01:24:41.839813 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.839848 kubelet[3163]: E0311 01:24:41.839822 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.840022 kubelet[3163]: E0311 01:24:41.839949 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.840022 kubelet[3163]: W0311 01:24:41.839956 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.840022 kubelet[3163]: E0311 01:24:41.839964 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.840283 kubelet[3163]: E0311 01:24:41.840126 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.840283 kubelet[3163]: W0311 01:24:41.840137 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.840283 kubelet[3163]: E0311 01:24:41.840145 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.840283 kubelet[3163]: E0311 01:24:41.840283 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.840515 kubelet[3163]: W0311 01:24:41.840291 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.840515 kubelet[3163]: E0311 01:24:41.840298 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.840515 kubelet[3163]: E0311 01:24:41.840464 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.840515 kubelet[3163]: W0311 01:24:41.840473 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.840515 kubelet[3163]: E0311 01:24:41.840481 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.841895 kubelet[3163]: E0311 01:24:41.841542 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.841895 kubelet[3163]: W0311 01:24:41.841577 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.841895 kubelet[3163]: E0311 01:24:41.841593 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:41.847288 containerd[1710]: time="2026-03-11T01:24:41.847256314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cc864b697-4rx75,Uid:78b69c07-9d00-42f0-9dad-c2147898a827,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c70f0a7beeda9dda9fa6b0dfb172ef7be2cf6feb099f8e637ce1b9334c03f80\"" Mar 11 01:24:41.850197 containerd[1710]: time="2026-03-11T01:24:41.850018354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 11 01:24:41.856131 kubelet[3163]: E0311 01:24:41.856070 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:41.856131 kubelet[3163]: W0311 01:24:41.856086 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:41.856131 kubelet[3163]: E0311 01:24:41.856102 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:41.885919 containerd[1710]: time="2026-03-11T01:24:41.885621082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:41.885919 containerd[1710]: time="2026-03-11T01:24:41.885677682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:41.885919 containerd[1710]: time="2026-03-11T01:24:41.885692442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:41.885919 containerd[1710]: time="2026-03-11T01:24:41.885774882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:41.902636 systemd[1]: Started cri-containerd-347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6.scope - libcontainer container 347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6. Mar 11 01:24:41.922090 containerd[1710]: time="2026-03-11T01:24:41.922032249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m5rfl,Uid:c54a11df-f8af-494d-bbc0-9391eca54083,Namespace:calico-system,Attempt:0,} returns sandbox id \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\"" Mar 11 01:24:42.820924 kubelet[3163]: E0311 01:24:42.820599 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:43.023224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918663334.mount: Deactivated successfully. 
Mar 11 01:24:43.505456 containerd[1710]: time="2026-03-11T01:24:43.505172936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:43.509341 containerd[1710]: time="2026-03-11T01:24:43.509316217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 11 01:24:43.515201 containerd[1710]: time="2026-03-11T01:24:43.515174698Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:43.520815 containerd[1710]: time="2026-03-11T01:24:43.520787259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:43.521559 containerd[1710]: time="2026-03-11T01:24:43.521531179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.670767704s" Mar 11 01:24:43.521592 containerd[1710]: time="2026-03-11T01:24:43.521561659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 11 01:24:43.524475 containerd[1710]: time="2026-03-11T01:24:43.523273979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 11 01:24:43.538931 containerd[1710]: time="2026-03-11T01:24:43.538892903Z" level=info msg="CreateContainer within sandbox \"7c70f0a7beeda9dda9fa6b0dfb172ef7be2cf6feb099f8e637ce1b9334c03f80\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 11 01:24:43.585303 containerd[1710]: time="2026-03-11T01:24:43.585213272Z" level=info msg="CreateContainer within sandbox \"7c70f0a7beeda9dda9fa6b0dfb172ef7be2cf6feb099f8e637ce1b9334c03f80\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d7aec0a99cd75a33db0d767485069f83ed9ce693b3d6984abcbee1fe569cbaa7\"" Mar 11 01:24:43.586176 containerd[1710]: time="2026-03-11T01:24:43.585667232Z" level=info msg="StartContainer for \"d7aec0a99cd75a33db0d767485069f83ed9ce693b3d6984abcbee1fe569cbaa7\"" Mar 11 01:24:43.614646 systemd[1]: Started cri-containerd-d7aec0a99cd75a33db0d767485069f83ed9ce693b3d6984abcbee1fe569cbaa7.scope - libcontainer container d7aec0a99cd75a33db0d767485069f83ed9ce693b3d6984abcbee1fe569cbaa7. Mar 11 01:24:43.653494 containerd[1710]: time="2026-03-11T01:24:43.653450166Z" level=info msg="StartContainer for \"d7aec0a99cd75a33db0d767485069f83ed9ce693b3d6984abcbee1fe569cbaa7\" returns successfully" Mar 11 01:24:43.928145 kubelet[3163]: E0311 01:24:43.927962 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.928145 kubelet[3163]: W0311 01:24:43.927987 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.928145 kubelet[3163]: E0311 01:24:43.928029 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.929639 kubelet[3163]: E0311 01:24:43.929518 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.929639 kubelet[3163]: W0311 01:24:43.929535 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.929639 kubelet[3163]: E0311 01:24:43.929548 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.930622 kubelet[3163]: E0311 01:24:43.930510 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.930622 kubelet[3163]: W0311 01:24:43.930526 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.930622 kubelet[3163]: E0311 01:24:43.930538 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.930783 kubelet[3163]: E0311 01:24:43.930773 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.930837 kubelet[3163]: W0311 01:24:43.930827 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.930890 kubelet[3163]: E0311 01:24:43.930881 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.931109 kubelet[3163]: E0311 01:24:43.931097 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.931250 kubelet[3163]: W0311 01:24:43.931168 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.931250 kubelet[3163]: E0311 01:24:43.931183 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.931371 kubelet[3163]: E0311 01:24:43.931361 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.931425 kubelet[3163]: W0311 01:24:43.931415 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.931562 kubelet[3163]: E0311 01:24:43.931550 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.931862 kubelet[3163]: E0311 01:24:43.931775 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.931862 kubelet[3163]: W0311 01:24:43.931786 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.931862 kubelet[3163]: E0311 01:24:43.931796 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.932009 kubelet[3163]: E0311 01:24:43.931999 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.932062 kubelet[3163]: W0311 01:24:43.932051 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.933596 kubelet[3163]: E0311 01:24:43.933472 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.933748 kubelet[3163]: E0311 01:24:43.933735 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.933809 kubelet[3163]: W0311 01:24:43.933798 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.933866 kubelet[3163]: E0311 01:24:43.933856 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.934140 kubelet[3163]: E0311 01:24:43.934127 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.934297 kubelet[3163]: W0311 01:24:43.934203 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.934297 kubelet[3163]: E0311 01:24:43.934219 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.934421 kubelet[3163]: E0311 01:24:43.934410 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.934487 kubelet[3163]: W0311 01:24:43.934476 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.934545 kubelet[3163]: E0311 01:24:43.934535 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.934764 kubelet[3163]: E0311 01:24:43.934752 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.934916 kubelet[3163]: W0311 01:24:43.934824 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.934916 kubelet[3163]: E0311 01:24:43.934839 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.935036 kubelet[3163]: E0311 01:24:43.935026 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.935094 kubelet[3163]: W0311 01:24:43.935084 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.935147 kubelet[3163]: E0311 01:24:43.935137 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.935461 kubelet[3163]: E0311 01:24:43.935357 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.935461 kubelet[3163]: W0311 01:24:43.935368 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.935461 kubelet[3163]: E0311 01:24:43.935378 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.935638 kubelet[3163]: E0311 01:24:43.935625 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.935694 kubelet[3163]: W0311 01:24:43.935684 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.935792 kubelet[3163]: E0311 01:24:43.935738 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.952696 kubelet[3163]: E0311 01:24:43.952642 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.952696 kubelet[3163]: W0311 01:24:43.952660 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.952696 kubelet[3163]: E0311 01:24:43.952674 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.953129 kubelet[3163]: E0311 01:24:43.953058 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.953129 kubelet[3163]: W0311 01:24:43.953071 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.953129 kubelet[3163]: E0311 01:24:43.953083 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.953385 kubelet[3163]: E0311 01:24:43.953369 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.953385 kubelet[3163]: W0311 01:24:43.953384 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.953465 kubelet[3163]: E0311 01:24:43.953396 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.953662 kubelet[3163]: E0311 01:24:43.953647 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.953662 kubelet[3163]: W0311 01:24:43.953660 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.953817 kubelet[3163]: E0311 01:24:43.953676 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.953908 kubelet[3163]: E0311 01:24:43.953892 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.953950 kubelet[3163]: W0311 01:24:43.953907 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.953950 kubelet[3163]: E0311 01:24:43.953919 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.954141 kubelet[3163]: E0311 01:24:43.954128 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.954186 kubelet[3163]: W0311 01:24:43.954141 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.954186 kubelet[3163]: E0311 01:24:43.954151 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.954519 kubelet[3163]: E0311 01:24:43.954402 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.954519 kubelet[3163]: W0311 01:24:43.954414 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.954519 kubelet[3163]: E0311 01:24:43.954425 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.954951 kubelet[3163]: E0311 01:24:43.954834 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.954951 kubelet[3163]: W0311 01:24:43.954852 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.954951 kubelet[3163]: E0311 01:24:43.954866 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.955566 kubelet[3163]: E0311 01:24:43.955476 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.955566 kubelet[3163]: W0311 01:24:43.955494 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.955566 kubelet[3163]: E0311 01:24:43.955507 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.956025 kubelet[3163]: E0311 01:24:43.955894 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.956025 kubelet[3163]: W0311 01:24:43.955906 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.956025 kubelet[3163]: E0311 01:24:43.955917 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.957749 kubelet[3163]: E0311 01:24:43.957632 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.957749 kubelet[3163]: W0311 01:24:43.957647 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.957749 kubelet[3163]: E0311 01:24:43.957661 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.958012 kubelet[3163]: E0311 01:24:43.957944 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.958012 kubelet[3163]: W0311 01:24:43.957956 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.958012 kubelet[3163]: E0311 01:24:43.957966 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.958344 kubelet[3163]: E0311 01:24:43.958277 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.958344 kubelet[3163]: W0311 01:24:43.958288 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.958344 kubelet[3163]: E0311 01:24:43.958299 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.958756 kubelet[3163]: E0311 01:24:43.958640 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.958756 kubelet[3163]: W0311 01:24:43.958653 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.958756 kubelet[3163]: E0311 01:24:43.958664 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.959121 kubelet[3163]: E0311 01:24:43.958969 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.959121 kubelet[3163]: W0311 01:24:43.958980 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.959121 kubelet[3163]: E0311 01:24:43.958992 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.959481 kubelet[3163]: E0311 01:24:43.959465 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.959545 kubelet[3163]: W0311 01:24:43.959482 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.959545 kubelet[3163]: E0311 01:24:43.959498 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:43.959877 kubelet[3163]: E0311 01:24:43.959778 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.959877 kubelet[3163]: W0311 01:24:43.959792 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.959877 kubelet[3163]: E0311 01:24:43.959803 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:43.960175 kubelet[3163]: E0311 01:24:43.960118 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:43.960175 kubelet[3163]: W0311 01:24:43.960132 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:43.960175 kubelet[3163]: E0311 01:24:43.960151 3163 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:44.747457 containerd[1710]: time="2026-03-11T01:24:44.747365472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:44.750650 containerd[1710]: time="2026-03-11T01:24:44.750531592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 11 01:24:44.754001 containerd[1710]: time="2026-03-11T01:24:44.753958313Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:44.758724 containerd[1710]: time="2026-03-11T01:24:44.758682514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:44.759768 containerd[1710]: time="2026-03-11T01:24:44.759652034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.236347095s" Mar 11 01:24:44.759768 containerd[1710]: time="2026-03-11T01:24:44.759682954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 11 01:24:44.768161 containerd[1710]: time="2026-03-11T01:24:44.768133316Z" level=info msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 11 01:24:44.803781 containerd[1710]: time="2026-03-11T01:24:44.803671363Z" level=info msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b\"" Mar 11 01:24:44.804587 containerd[1710]: time="2026-03-11T01:24:44.804407484Z" level=info msg="StartContainer for \"66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b\"" Mar 11 01:24:44.822529 kubelet[3163]: E0311 01:24:44.822032 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:44.840585 systemd[1]: Started cri-containerd-66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b.scope - libcontainer container 66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b. Mar 11 01:24:44.871014 containerd[1710]: time="2026-03-11T01:24:44.870906417Z" level=info msg="StartContainer for \"66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b\" returns successfully" Mar 11 01:24:44.875029 systemd[1]: cri-containerd-66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b.scope: Deactivated successfully. Mar 11 01:24:44.896975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b-rootfs.mount: Deactivated successfully. 
Mar 11 01:24:44.913571 kubelet[3163]: I0311 01:24:44.913546 3163 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 11 01:24:45.066497 kubelet[3163]: I0311 01:24:44.933679 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5cc864b697-4rx75" podStartSLOduration=2.260788125 podStartE2EDuration="3.93366615s" podCreationTimestamp="2026-03-11 01:24:41 +0000 UTC" firstStartedPulling="2026-03-11 01:24:41.849568394 +0000 UTC m=+20.140870326" lastFinishedPulling="2026-03-11 01:24:43.522446419 +0000 UTC m=+21.813748351" observedRunningTime="2026-03-11 01:24:43.950228467 +0000 UTC m=+22.241530399" watchObservedRunningTime="2026-03-11 01:24:44.93366615 +0000 UTC m=+23.224968082" Mar 11 01:24:46.333328 containerd[1710]: time="2026-03-11T01:24:46.333138119Z" level=info msg="shim disconnected" id=66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b namespace=k8s.io Mar 11 01:24:46.333328 containerd[1710]: time="2026-03-11T01:24:46.333213999Z" level=warning msg="cleaning up after shim disconnected" id=66fafae6c7f4d4169c213e6c628d18b59091691322d47115771f4c56d027546b namespace=k8s.io Mar 11 01:24:46.333328 containerd[1710]: time="2026-03-11T01:24:46.333223399Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 01:24:46.343084 containerd[1710]: time="2026-03-11T01:24:46.343040841Z" level=warning msg="cleanup warnings time=\"2026-03-11T01:24:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 11 01:24:46.820521 kubelet[3163]: E0311 01:24:46.820488 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" 
podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:46.919481 containerd[1710]: time="2026-03-11T01:24:46.919012199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 11 01:24:48.820284 kubelet[3163]: E0311 01:24:48.820240 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:49.386620 kubelet[3163]: I0311 01:24:49.386589 3163 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 11 01:24:50.821565 kubelet[3163]: E0311 01:24:50.821315 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:51.306605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242904291.mount: Deactivated successfully. 
Mar 11 01:24:51.383370 containerd[1710]: time="2026-03-11T01:24:51.382675195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:51.385881 containerd[1710]: time="2026-03-11T01:24:51.385853235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 11 01:24:51.392011 containerd[1710]: time="2026-03-11T01:24:51.391060595Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:51.395900 containerd[1710]: time="2026-03-11T01:24:51.395874195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:51.396477 containerd[1710]: time="2026-03-11T01:24:51.396382035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.477327475s" Mar 11 01:24:51.396569 containerd[1710]: time="2026-03-11T01:24:51.396554235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 11 01:24:51.406680 containerd[1710]: time="2026-03-11T01:24:51.406654355Z" level=info msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 11 01:24:51.455632 containerd[1710]: time="2026-03-11T01:24:51.455595355Z" level=info 
msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71\"" Mar 11 01:24:51.457006 containerd[1710]: time="2026-03-11T01:24:51.456985395Z" level=info msg="StartContainer for \"3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71\"" Mar 11 01:24:51.486572 systemd[1]: Started cri-containerd-3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71.scope - libcontainer container 3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71. Mar 11 01:24:51.515175 containerd[1710]: time="2026-03-11T01:24:51.515136395Z" level=info msg="StartContainer for \"3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71\" returns successfully" Mar 11 01:24:51.546571 systemd[1]: cri-containerd-3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71.scope: Deactivated successfully. Mar 11 01:24:52.306747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71-rootfs.mount: Deactivated successfully. 
Mar 11 01:24:52.821248 kubelet[3163]: E0311 01:24:52.820113 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:53.166217 containerd[1710]: time="2026-03-11T01:24:53.165888470Z" level=info msg="shim disconnected" id=3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71 namespace=k8s.io Mar 11 01:24:53.166217 containerd[1710]: time="2026-03-11T01:24:53.165957710Z" level=warning msg="cleaning up after shim disconnected" id=3bf3c6b498fcfc1c1179d7293a78b72e67f0eb03f1ddd3c46558a1893a1b9a71 namespace=k8s.io Mar 11 01:24:53.166217 containerd[1710]: time="2026-03-11T01:24:53.165966310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 01:24:53.935403 containerd[1710]: time="2026-03-11T01:24:53.935180907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 11 01:24:54.820351 kubelet[3163]: E0311 01:24:54.820283 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:56.374189 containerd[1710]: time="2026-03-11T01:24:56.374136420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:56.377460 containerd[1710]: time="2026-03-11T01:24:56.377410580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 11 01:24:56.381761 containerd[1710]: time="2026-03-11T01:24:56.381408180Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:56.386039 containerd[1710]: time="2026-03-11T01:24:56.386012380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:56.386796 containerd[1710]: time="2026-03-11T01:24:56.386766940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.451547393s" Mar 11 01:24:56.386854 containerd[1710]: time="2026-03-11T01:24:56.386798420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 11 01:24:56.395332 containerd[1710]: time="2026-03-11T01:24:56.395300820Z" level=info msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 11 01:24:56.435457 containerd[1710]: time="2026-03-11T01:24:56.435286860Z" level=info msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca\"" Mar 11 01:24:56.436614 containerd[1710]: time="2026-03-11T01:24:56.436273220Z" level=info msg="StartContainer for \"926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca\"" Mar 11 01:24:56.466588 systemd[1]: Started 
cri-containerd-926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca.scope - libcontainer container 926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca. Mar 11 01:24:56.496764 containerd[1710]: time="2026-03-11T01:24:56.496718780Z" level=info msg="StartContainer for \"926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca\" returns successfully" Mar 11 01:24:56.820815 kubelet[3163]: E0311 01:24:56.820767 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:57.739288 containerd[1710]: time="2026-03-11T01:24:57.739223455Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 11 01:24:57.741308 systemd[1]: cri-containerd-926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca.scope: Deactivated successfully. Mar 11 01:24:57.760812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca-rootfs.mount: Deactivated successfully. Mar 11 01:24:57.780452 kubelet[3163]: I0311 01:24:57.777416 3163 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 11 01:24:58.618367 systemd[1]: Created slice kubepods-burstable-pod560bb52c_64f3_4776_995a_2c4cdca92a07.slice - libcontainer container kubepods-burstable-pod560bb52c_64f3_4776_995a_2c4cdca92a07.slice. Mar 11 01:24:58.624832 systemd[1]: Created slice kubepods-burstable-pod19a36673_c7f4_4cd1_baa1_3ba9ae7cc00c.slice - libcontainer container kubepods-burstable-pod19a36673_c7f4_4cd1_baa1_3ba9ae7cc00c.slice. 
Mar 11 01:24:58.625987 containerd[1710]: time="2026-03-11T01:24:58.625767755Z" level=info msg="shim disconnected" id=926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca namespace=k8s.io Mar 11 01:24:58.625987 containerd[1710]: time="2026-03-11T01:24:58.625820795Z" level=warning msg="cleaning up after shim disconnected" id=926ea3552f86820d1bcfec1f3eef06546173e2c1a0ab320eb883b49498c137ca namespace=k8s.io Mar 11 01:24:58.625987 containerd[1710]: time="2026-03-11T01:24:58.625829195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 01:24:58.640682 systemd[1]: Created slice kubepods-besteffort-pod08e128c7_b7fc_48ab_8314_d723695bfcf7.slice - libcontainer container kubepods-besteffort-pod08e128c7_b7fc_48ab_8314_d723695bfcf7.slice. Mar 11 01:24:58.644139 kubelet[3163]: I0311 01:24:58.644108 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqdfg\" (UniqueName: \"kubernetes.io/projected/560bb52c-64f3-4776-995a-2c4cdca92a07-kube-api-access-pqdfg\") pod \"coredns-7d764666f9-bphv7\" (UID: \"560bb52c-64f3-4776-995a-2c4cdca92a07\") " pod="kube-system/coredns-7d764666f9-bphv7" Mar 11 01:24:58.644508 kubelet[3163]: I0311 01:24:58.644387 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/560bb52c-64f3-4776-995a-2c4cdca92a07-config-volume\") pod \"coredns-7d764666f9-bphv7\" (UID: \"560bb52c-64f3-4776-995a-2c4cdca92a07\") " pod="kube-system/coredns-7d764666f9-bphv7" Mar 11 01:24:58.644560 containerd[1710]: time="2026-03-11T01:24:58.644473990Z" level=warning msg="cleanup warnings time=\"2026-03-11T01:24:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 11 01:24:58.645971 kubelet[3163]: I0311 01:24:58.644427 3163 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2ngr\" (UniqueName: \"kubernetes.io/projected/19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c-kube-api-access-b2ngr\") pod \"coredns-7d764666f9-bwhjs\" (UID: \"19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c\") " pod="kube-system/coredns-7d764666f9-bwhjs" Mar 11 01:24:58.645971 kubelet[3163]: I0311 01:24:58.645332 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c-config-volume\") pod \"coredns-7d764666f9-bwhjs\" (UID: \"19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c\") " pod="kube-system/coredns-7d764666f9-bwhjs" Mar 11 01:24:58.652369 systemd[1]: Created slice kubepods-besteffort-poddf051263_45b2_4a8e_8bd1_b2a4cb98fb5f.slice - libcontainer container kubepods-besteffort-poddf051263_45b2_4a8e_8bd1_b2a4cb98fb5f.slice. Mar 11 01:24:58.658828 systemd[1]: Created slice kubepods-besteffort-podcf40c38e_e681_42e6_9e14_7a8a99367a10.slice - libcontainer container kubepods-besteffort-podcf40c38e_e681_42e6_9e14_7a8a99367a10.slice. Mar 11 01:24:58.667577 systemd[1]: Created slice kubepods-besteffort-pod464d6928_5de2_4f22_96ff_832817a85459.slice - libcontainer container kubepods-besteffort-pod464d6928_5de2_4f22_96ff_832817a85459.slice. Mar 11 01:24:58.674210 systemd[1]: Created slice kubepods-besteffort-pod17b44239_9f88_4e92_b7af_79e5bf77ec3d.slice - libcontainer container kubepods-besteffort-pod17b44239_9f88_4e92_b7af_79e5bf77ec3d.slice. 
Mar 11 01:24:58.745920 kubelet[3163]: I0311 01:24:58.745877 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf40c38e-e681-42e6-9e14-7a8a99367a10-calico-apiserver-certs\") pod \"calico-apiserver-5bffd5d454-899bq\" (UID: \"cf40c38e-e681-42e6-9e14-7a8a99367a10\") " pod="calico-system/calico-apiserver-5bffd5d454-899bq" Mar 11 01:24:58.745920 kubelet[3163]: I0311 01:24:58.745918 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-whisker-ca-bundle\") pod \"whisker-69548b5b65-pfgf4\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " pod="calico-system/whisker-69548b5b65-pfgf4" Mar 11 01:24:58.746082 kubelet[3163]: I0311 01:24:58.745949 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e128c7-b7fc-48ab-8314-d723695bfcf7-tigera-ca-bundle\") pod \"calico-kube-controllers-567d7d4557-rm6ck\" (UID: \"08e128c7-b7fc-48ab-8314-d723695bfcf7\") " pod="calico-system/calico-kube-controllers-567d7d4557-rm6ck" Mar 11 01:24:58.746082 kubelet[3163]: I0311 01:24:58.745964 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn68n\" (UniqueName: \"kubernetes.io/projected/08e128c7-b7fc-48ab-8314-d723695bfcf7-kube-api-access-pn68n\") pod \"calico-kube-controllers-567d7d4557-rm6ck\" (UID: \"08e128c7-b7fc-48ab-8314-d723695bfcf7\") " pod="calico-system/calico-kube-controllers-567d7d4557-rm6ck" Mar 11 01:24:58.746082 kubelet[3163]: I0311 01:24:58.745984 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfl56\" (UniqueName: 
\"kubernetes.io/projected/cf40c38e-e681-42e6-9e14-7a8a99367a10-kube-api-access-tfl56\") pod \"calico-apiserver-5bffd5d454-899bq\" (UID: \"cf40c38e-e681-42e6-9e14-7a8a99367a10\") " pod="calico-system/calico-apiserver-5bffd5d454-899bq" Mar 11 01:24:58.746082 kubelet[3163]: I0311 01:24:58.745999 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bjn\" (UniqueName: \"kubernetes.io/projected/464d6928-5de2-4f22-96ff-832817a85459-kube-api-access-g5bjn\") pod \"whisker-69548b5b65-pfgf4\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " pod="calico-system/whisker-69548b5b65-pfgf4" Mar 11 01:24:58.746082 kubelet[3163]: I0311 01:24:58.746013 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrhj7\" (UniqueName: \"kubernetes.io/projected/df051263-45b2-4a8e-8bd1-b2a4cb98fb5f-kube-api-access-rrhj7\") pod \"goldmane-9f7667bb8-6qt4g\" (UID: \"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f\") " pod="calico-system/goldmane-9f7667bb8-6qt4g" Mar 11 01:24:58.746203 kubelet[3163]: I0311 01:24:58.746035 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-nginx-config\") pod \"whisker-69548b5b65-pfgf4\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " pod="calico-system/whisker-69548b5b65-pfgf4" Mar 11 01:24:58.746203 kubelet[3163]: I0311 01:24:58.746051 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/464d6928-5de2-4f22-96ff-832817a85459-whisker-backend-key-pair\") pod \"whisker-69548b5b65-pfgf4\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " pod="calico-system/whisker-69548b5b65-pfgf4" Mar 11 01:24:58.746203 kubelet[3163]: I0311 01:24:58.746064 3163 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df051263-45b2-4a8e-8bd1-b2a4cb98fb5f-config\") pod \"goldmane-9f7667bb8-6qt4g\" (UID: \"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f\") " pod="calico-system/goldmane-9f7667bb8-6qt4g" Mar 11 01:24:58.746203 kubelet[3163]: I0311 01:24:58.746078 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df051263-45b2-4a8e-8bd1-b2a4cb98fb5f-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-6qt4g\" (UID: \"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f\") " pod="calico-system/goldmane-9f7667bb8-6qt4g" Mar 11 01:24:58.746203 kubelet[3163]: I0311 01:24:58.746095 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/df051263-45b2-4a8e-8bd1-b2a4cb98fb5f-goldmane-key-pair\") pod \"goldmane-9f7667bb8-6qt4g\" (UID: \"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f\") " pod="calico-system/goldmane-9f7667bb8-6qt4g" Mar 11 01:24:58.746312 kubelet[3163]: I0311 01:24:58.746112 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17b44239-9f88-4e92-b7af-79e5bf77ec3d-calico-apiserver-certs\") pod \"calico-apiserver-5bffd5d454-9txnp\" (UID: \"17b44239-9f88-4e92-b7af-79e5bf77ec3d\") " pod="calico-system/calico-apiserver-5bffd5d454-9txnp" Mar 11 01:24:58.746312 kubelet[3163]: I0311 01:24:58.746137 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8nhp\" (UniqueName: \"kubernetes.io/projected/17b44239-9f88-4e92-b7af-79e5bf77ec3d-kube-api-access-x8nhp\") pod \"calico-apiserver-5bffd5d454-9txnp\" (UID: \"17b44239-9f88-4e92-b7af-79e5bf77ec3d\") " pod="calico-system/calico-apiserver-5bffd5d454-9txnp" Mar 11 01:24:58.825895 
systemd[1]: Created slice kubepods-besteffort-pod37965836_ef4b_4e97_87a3_07d4d846b05a.slice - libcontainer container kubepods-besteffort-pod37965836_ef4b_4e97_87a3_07d4d846b05a.slice. Mar 11 01:24:58.835038 containerd[1710]: time="2026-03-11T01:24:58.834678383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xdrq,Uid:37965836-ef4b-4e97-87a3-07d4d846b05a,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:58.930508 containerd[1710]: time="2026-03-11T01:24:58.929051320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bphv7,Uid:560bb52c-64f3-4776-995a-2c4cdca92a07,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:58.939052 containerd[1710]: time="2026-03-11T01:24:58.939003397Z" level=error msg="Failed to destroy network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:58.939308 containerd[1710]: time="2026-03-11T01:24:58.939280557Z" level=error msg="encountered an error cleaning up failed sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:58.940115 containerd[1710]: time="2026-03-11T01:24:58.939333277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xdrq,Uid:37965836-ef4b-4e97-87a3-07d4d846b05a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 11 01:24:58.940604 kubelet[3163]: E0311 01:24:58.940572 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:58.940657 kubelet[3163]: E0311 01:24:58.940624 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xdrq" Mar 11 01:24:58.940657 kubelet[3163]: E0311 01:24:58.940642 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xdrq" Mar 11 01:24:58.940704 kubelet[3163]: E0311 01:24:58.940689 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4xdrq_calico-system(37965836-ef4b-4e97-87a3-07d4d846b05a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4xdrq_calico-system(37965836-ef4b-4e97-87a3-07d4d846b05a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:58.941586 containerd[1710]: time="2026-03-11T01:24:58.941560677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bwhjs,Uid:19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:58.951584 kubelet[3163]: I0311 01:24:58.950936 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:24:58.967730 containerd[1710]: time="2026-03-11T01:24:58.966497430Z" level=info msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 11 01:24:58.967730 containerd[1710]: time="2026-03-11T01:24:58.966672430Z" level=info msg="StopPodSandbox for \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\"" Mar 11 01:24:58.967730 containerd[1710]: time="2026-03-11T01:24:58.966803910Z" level=info msg="Ensure that sandbox 47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8 in task-service has been cleanup successfully" Mar 11 01:24:58.968970 containerd[1710]: time="2026-03-11T01:24:58.968640630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qt4g,Uid:df051263-45b2-4a8e-8bd1-b2a4cb98fb5f,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:58.968970 containerd[1710]: time="2026-03-11T01:24:58.968855190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d7d4557-rm6ck,Uid:08e128c7-b7fc-48ab-8314-d723695bfcf7,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:58.973951 containerd[1710]: time="2026-03-11T01:24:58.973920229Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-899bq,Uid:cf40c38e-e681-42e6-9e14-7a8a99367a10,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:58.979849 containerd[1710]: time="2026-03-11T01:24:58.979796867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69548b5b65-pfgf4,Uid:464d6928-5de2-4f22-96ff-832817a85459,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:58.987262 containerd[1710]: time="2026-03-11T01:24:58.987233985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-9txnp,Uid:17b44239-9f88-4e92-b7af-79e5bf77ec3d,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:59.002345 containerd[1710]: time="2026-03-11T01:24:59.002307341Z" level=error msg="StopPodSandbox for \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\" failed" error="failed to destroy network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.002576 kubelet[3163]: E0311 01:24:59.002537 3163 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:24:59.002652 kubelet[3163]: E0311 01:24:59.002592 3163 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8"} Mar 11 01:24:59.002652 kubelet[3163]: E0311 01:24:59.002646 3163 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"37965836-ef4b-4e97-87a3-07d4d846b05a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 01:24:59.002741 kubelet[3163]: E0311 01:24:59.002672 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37965836-ef4b-4e97-87a3-07d4d846b05a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4xdrq" podUID="37965836-ef4b-4e97-87a3-07d4d846b05a" Mar 11 01:24:59.065221 containerd[1710]: time="2026-03-11T01:24:59.065014086Z" level=error msg="Failed to destroy network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.065390 containerd[1710]: time="2026-03-11T01:24:59.065351286Z" level=error msg="encountered an error cleaning up failed sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.065459 containerd[1710]: time="2026-03-11T01:24:59.065417126Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bphv7,Uid:560bb52c-64f3-4776-995a-2c4cdca92a07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.065916 kubelet[3163]: E0311 01:24:59.065875 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.065987 kubelet[3163]: E0311 01:24:59.065931 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-bphv7" Mar 11 01:24:59.065987 kubelet[3163]: E0311 01:24:59.065950 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-bphv7" Mar 11 01:24:59.066033 kubelet[3163]: E0311 01:24:59.065998 3163 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-bphv7_kube-system(560bb52c-64f3-4776-995a-2c4cdca92a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-bphv7_kube-system(560bb52c-64f3-4776-995a-2c4cdca92a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-bphv7" podUID="560bb52c-64f3-4776-995a-2c4cdca92a07" Mar 11 01:24:59.164218 containerd[1710]: time="2026-03-11T01:24:59.163498421Z" level=info msg="CreateContainer within sandbox \"347593c43aa1dae0c699a8b7cb60613df78f3bcebe0eb8659c9307970644d6f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0018079ec7eec7c9731d8f0242f341453c81de8383098e48b49da1be14e456ed\"" Mar 11 01:24:59.165637 containerd[1710]: time="2026-03-11T01:24:59.165524061Z" level=info msg="StartContainer for \"0018079ec7eec7c9731d8f0242f341453c81de8383098e48b49da1be14e456ed\"" Mar 11 01:24:59.221708 containerd[1710]: time="2026-03-11T01:24:59.220819047Z" level=error msg="Failed to destroy network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.223781 containerd[1710]: time="2026-03-11T01:24:59.223489447Z" level=error msg="encountered an error cleaning up failed sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.223866 containerd[1710]: time="2026-03-11T01:24:59.223805207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bwhjs,Uid:19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.224262 kubelet[3163]: E0311 01:24:59.224226 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.224331 kubelet[3163]: E0311 01:24:59.224285 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-bwhjs" Mar 11 01:24:59.224331 kubelet[3163]: E0311 01:24:59.224306 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7d764666f9-bwhjs" Mar 11 01:24:59.224388 kubelet[3163]: E0311 01:24:59.224368 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-bwhjs_kube-system(19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-bwhjs_kube-system(19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-bwhjs" podUID="19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c" Mar 11 01:24:59.246287 systemd[1]: Started cri-containerd-0018079ec7eec7c9731d8f0242f341453c81de8383098e48b49da1be14e456ed.scope - libcontainer container 0018079ec7eec7c9731d8f0242f341453c81de8383098e48b49da1be14e456ed. 
Mar 11 01:24:59.300843 containerd[1710]: time="2026-03-11T01:24:59.300798787Z" level=info msg="StartContainer for \"0018079ec7eec7c9731d8f0242f341453c81de8383098e48b49da1be14e456ed\" returns successfully" Mar 11 01:24:59.343205 containerd[1710]: time="2026-03-11T01:24:59.343157017Z" level=error msg="Failed to destroy network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.344942 containerd[1710]: time="2026-03-11T01:24:59.344805777Z" level=error msg="encountered an error cleaning up failed sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.345078 containerd[1710]: time="2026-03-11T01:24:59.345055856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qt4g,Uid:df051263-45b2-4a8e-8bd1-b2a4cb98fb5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.346705 kubelet[3163]: E0311 01:24:59.346666 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 11 01:24:59.346787 kubelet[3163]: E0311 01:24:59.346722 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6qt4g" Mar 11 01:24:59.346787 kubelet[3163]: E0311 01:24:59.346741 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6qt4g" Mar 11 01:24:59.346833 kubelet[3163]: E0311 01:24:59.346807 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-6qt4g_calico-system(df051263-45b2-4a8e-8bd1-b2a4cb98fb5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-6qt4g_calico-system(df051263-45b2-4a8e-8bd1-b2a4cb98fb5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-6qt4g" podUID="df051263-45b2-4a8e-8bd1-b2a4cb98fb5f" Mar 11 01:24:59.358221 containerd[1710]: time="2026-03-11T01:24:59.358176533Z" level=error msg="Failed to destroy network for sandbox 
\"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.362745 containerd[1710]: time="2026-03-11T01:24:59.362620292Z" level=error msg="Failed to destroy network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.363704 containerd[1710]: time="2026-03-11T01:24:59.362936332Z" level=error msg="encountered an error cleaning up failed sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.363704 containerd[1710]: time="2026-03-11T01:24:59.362990012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-899bq,Uid:cf40c38e-e681-42e6-9e14-7a8a99367a10,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.364269 kubelet[3163]: E0311 01:24:59.363918 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.364269 kubelet[3163]: E0311 01:24:59.363981 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bffd5d454-899bq" Mar 11 01:24:59.364269 kubelet[3163]: E0311 01:24:59.364001 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bffd5d454-899bq" Mar 11 01:24:59.364398 kubelet[3163]: E0311 01:24:59.364050 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bffd5d454-899bq_calico-system(cf40c38e-e681-42e6-9e14-7a8a99367a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bffd5d454-899bq_calico-system(cf40c38e-e681-42e6-9e14-7a8a99367a10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bffd5d454-899bq" podUID="cf40c38e-e681-42e6-9e14-7a8a99367a10" Mar 11 01:24:59.366145 containerd[1710]: 
time="2026-03-11T01:24:59.366108931Z" level=error msg="encountered an error cleaning up failed sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.366960 containerd[1710]: time="2026-03-11T01:24:59.366925251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69548b5b65-pfgf4,Uid:464d6928-5de2-4f22-96ff-832817a85459,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.367747 kubelet[3163]: E0311 01:24:59.367608 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.367747 kubelet[3163]: E0311 01:24:59.367646 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69548b5b65-pfgf4" Mar 11 01:24:59.367747 kubelet[3163]: E0311 01:24:59.367671 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69548b5b65-pfgf4" Mar 11 01:24:59.367865 kubelet[3163]: E0311 01:24:59.367710 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69548b5b65-pfgf4_calico-system(464d6928-5de2-4f22-96ff-832817a85459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69548b5b65-pfgf4_calico-system(464d6928-5de2-4f22-96ff-832817a85459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69548b5b65-pfgf4" podUID="464d6928-5de2-4f22-96ff-832817a85459" Mar 11 01:24:59.378546 containerd[1710]: time="2026-03-11T01:24:59.378515488Z" level=error msg="Failed to destroy network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.378899 containerd[1710]: time="2026-03-11T01:24:59.378872608Z" level=error msg="encountered an error cleaning up failed sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 11 01:24:59.379068 containerd[1710]: time="2026-03-11T01:24:59.378985488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-9txnp,Uid:17b44239-9f88-4e92-b7af-79e5bf77ec3d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.379179 kubelet[3163]: E0311 01:24:59.379145 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.379217 kubelet[3163]: E0311 01:24:59.379178 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bffd5d454-9txnp" Mar 11 01:24:59.379217 kubelet[3163]: E0311 01:24:59.379192 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-5bffd5d454-9txnp" Mar 11 01:24:59.379269 kubelet[3163]: E0311 01:24:59.379229 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bffd5d454-9txnp_calico-system(17b44239-9f88-4e92-b7af-79e5bf77ec3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bffd5d454-9txnp_calico-system(17b44239-9f88-4e92-b7af-79e5bf77ec3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bffd5d454-9txnp" podUID="17b44239-9f88-4e92-b7af-79e5bf77ec3d" Mar 11 01:24:59.379810 containerd[1710]: time="2026-03-11T01:24:59.379614208Z" level=error msg="Failed to destroy network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.380210 containerd[1710]: time="2026-03-11T01:24:59.380183048Z" level=error msg="encountered an error cleaning up failed sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.380548 containerd[1710]: time="2026-03-11T01:24:59.380285488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d7d4557-rm6ck,Uid:08e128c7-b7fc-48ab-8314-d723695bfcf7,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.380984 kubelet[3163]: E0311 01:24:59.380773 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:59.380984 kubelet[3163]: E0311 01:24:59.380806 3163 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567d7d4557-rm6ck" Mar 11 01:24:59.380984 kubelet[3163]: E0311 01:24:59.380916 3163 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567d7d4557-rm6ck" Mar 11 01:24:59.381096 kubelet[3163]: E0311 01:24:59.380958 3163 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-567d7d4557-rm6ck_calico-system(08e128c7-b7fc-48ab-8314-d723695bfcf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-567d7d4557-rm6ck_calico-system(08e128c7-b7fc-48ab-8314-d723695bfcf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-567d7d4557-rm6ck" podUID="08e128c7-b7fc-48ab-8314-d723695bfcf7" Mar 11 01:24:59.859305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8-shm.mount: Deactivated successfully. Mar 11 01:24:59.954292 kubelet[3163]: I0311 01:24:59.954215 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:24:59.955918 containerd[1710]: time="2026-03-11T01:24:59.955397505Z" level=info msg="StopPodSandbox for \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\"" Mar 11 01:24:59.955918 containerd[1710]: time="2026-03-11T01:24:59.955581985Z" level=info msg="Ensure that sandbox 6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0 in task-service has been cleanup successfully" Mar 11 01:24:59.962108 kubelet[3163]: I0311 01:24:59.961780 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:24:59.964353 containerd[1710]: time="2026-03-11T01:24:59.964025903Z" level=info msg="StopPodSandbox for \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\"" Mar 11 01:24:59.964353 containerd[1710]: time="2026-03-11T01:24:59.964179103Z" level=info 
msg="Ensure that sandbox ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45 in task-service has been cleanup successfully" Mar 11 01:24:59.965035 kubelet[3163]: I0311 01:24:59.964767 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:24:59.965657 containerd[1710]: time="2026-03-11T01:24:59.965631503Z" level=info msg="StopPodSandbox for \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\"" Mar 11 01:24:59.965802 containerd[1710]: time="2026-03-11T01:24:59.965779183Z" level=info msg="Ensure that sandbox 9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2 in task-service has been cleanup successfully" Mar 11 01:24:59.971073 kubelet[3163]: I0311 01:24:59.971051 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:24:59.971936 containerd[1710]: time="2026-03-11T01:24:59.971909621Z" level=info msg="StopPodSandbox for \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\"" Mar 11 01:24:59.974560 containerd[1710]: time="2026-03-11T01:24:59.974335460Z" level=info msg="Ensure that sandbox 74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7 in task-service has been cleanup successfully" Mar 11 01:24:59.981032 kubelet[3163]: I0311 01:24:59.980741 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:24:59.990415 containerd[1710]: time="2026-03-11T01:24:59.989649497Z" level=info msg="StopPodSandbox for \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\"" Mar 11 01:24:59.990415 containerd[1710]: time="2026-03-11T01:24:59.989809897Z" level=info msg="Ensure that sandbox d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6 in task-service has been 
cleanup successfully" Mar 11 01:24:59.994782 kubelet[3163]: I0311 01:24:59.994666 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-m5rfl" podStartSLOduration=1.9697465109999999 podStartE2EDuration="18.994655095s" podCreationTimestamp="2026-03-11 01:24:41 +0000 UTC" firstStartedPulling="2026-03-11 01:24:41.92485689 +0000 UTC m=+20.216158822" lastFinishedPulling="2026-03-11 01:24:58.949765474 +0000 UTC m=+37.241067406" observedRunningTime="2026-03-11 01:24:59.989356617 +0000 UTC m=+38.280658549" watchObservedRunningTime="2026-03-11 01:24:59.994655095 +0000 UTC m=+38.285957027" Mar 11 01:24:59.997911 kubelet[3163]: I0311 01:24:59.996470 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:24:59.997983 containerd[1710]: time="2026-03-11T01:24:59.997569215Z" level=info msg="StopPodSandbox for \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\"" Mar 11 01:24:59.997983 containerd[1710]: time="2026-03-11T01:24:59.997719415Z" level=info msg="Ensure that sandbox 38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a in task-service has been cleanup successfully" Mar 11 01:24:59.999198 kubelet[3163]: I0311 01:24:59.999181 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:00.000667 containerd[1710]: time="2026-03-11T01:25:00.000635974Z" level=info msg="StopPodSandbox for \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\"" Mar 11 01:25:00.002700 containerd[1710]: time="2026-03-11T01:25:00.002666533Z" level=info msg="Ensure that sandbox dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef in task-service has been cleanup successfully" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.139 [INFO][4325] cni-plugin/k8s.go 652: Cleaning 
up netns ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.141 [INFO][4325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" iface="eth0" netns="/var/run/netns/cni-c3ae1584-d873-136c-1c94-dedc3823e6ee" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.144 [INFO][4325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" iface="eth0" netns="/var/run/netns/cni-c3ae1584-d873-136c-1c94-dedc3823e6ee" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.144 [INFO][4325] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" iface="eth0" netns="/var/run/netns/cni-c3ae1584-d873-136c-1c94-dedc3823e6ee" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.144 [INFO][4325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.144 [INFO][4325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.184 [INFO][4406] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.185 [INFO][4406] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.185 [INFO][4406] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.236 [WARNING][4406] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.236 [INFO][4406] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.244 [INFO][4406] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.258527 containerd[1710]: 2026-03-11 01:25:00.248 [INFO][4325] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:00.259307 systemd[1]: run-netns-cni\x2dc3ae1584\x2dd873\x2d136c\x2d1c94\x2ddedc3823e6ee.mount: Deactivated successfully. 
Mar 11 01:25:00.260883 containerd[1710]: time="2026-03-11T01:25:00.260843549Z" level=info msg="TearDown network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\" successfully" Mar 11 01:25:00.260999 containerd[1710]: time="2026-03-11T01:25:00.260985109Z" level=info msg="StopPodSandbox for \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\" returns successfully" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.104 [INFO][4289] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.104 [INFO][4289] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" iface="eth0" netns="/var/run/netns/cni-281382f7-1581-4f42-d8f8-a15ccffa30f9" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.105 [INFO][4289] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" iface="eth0" netns="/var/run/netns/cni-281382f7-1581-4f42-d8f8-a15ccffa30f9" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.105 [INFO][4289] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" iface="eth0" netns="/var/run/netns/cni-281382f7-1581-4f42-d8f8-a15ccffa30f9" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.105 [INFO][4289] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.106 [INFO][4289] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.213 [INFO][4397] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.213 [INFO][4397] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.252 [INFO][4397] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.281 [WARNING][4397] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.281 [INFO][4397] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.291 [INFO][4397] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.313533 containerd[1710]: 2026-03-11 01:25:00.298 [INFO][4289] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:00.317665 systemd[1]: run-netns-cni\x2d281382f7\x2d1581\x2d4f42\x2dd8f8\x2da15ccffa30f9.mount: Deactivated successfully. 
Mar 11 01:25:00.323178 containerd[1710]: time="2026-03-11T01:25:00.322846294Z" level=info msg="TearDown network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\" successfully" Mar 11 01:25:00.323178 containerd[1710]: time="2026-03-11T01:25:00.322881414Z" level=info msg="StopPodSandbox for \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\" returns successfully" Mar 11 01:25:00.330731 containerd[1710]: time="2026-03-11T01:25:00.330699412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qt4g,Uid:df051263-45b2-4a8e-8bd1-b2a4cb98fb5f,Namespace:calico-system,Attempt:1,}" Mar 11 01:25:00.359916 kubelet[3163]: I0311 01:25:00.359453 3163 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-whisker-ca-bundle\") pod \"464d6928-5de2-4f22-96ff-832817a85459\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " Mar 11 01:25:00.359916 kubelet[3163]: I0311 01:25:00.359493 3163 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-nginx-config\" (UniqueName: \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-nginx-config\") pod \"464d6928-5de2-4f22-96ff-832817a85459\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " Mar 11 01:25:00.359916 kubelet[3163]: I0311 01:25:00.359520 3163 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/464d6928-5de2-4f22-96ff-832817a85459-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/464d6928-5de2-4f22-96ff-832817a85459-whisker-backend-key-pair\") pod \"464d6928-5de2-4f22-96ff-832817a85459\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " Mar 11 01:25:00.359916 kubelet[3163]: I0311 01:25:00.359539 3163 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/464d6928-5de2-4f22-96ff-832817a85459-kube-api-access-g5bjn\" (UniqueName: \"kubernetes.io/projected/464d6928-5de2-4f22-96ff-832817a85459-kube-api-access-g5bjn\") pod \"464d6928-5de2-4f22-96ff-832817a85459\" (UID: \"464d6928-5de2-4f22-96ff-832817a85459\") " Mar 11 01:25:00.361996 kubelet[3163]: I0311 01:25:00.361969 3163 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-nginx-config" pod "464d6928-5de2-4f22-96ff-832817a85459" (UID: "464d6928-5de2-4f22-96ff-832817a85459"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 01:25:00.362695 kubelet[3163]: I0311 01:25:00.362492 3163 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-whisker-ca-bundle" pod "464d6928-5de2-4f22-96ff-832817a85459" (UID: "464d6928-5de2-4f22-96ff-832817a85459"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 01:25:00.372381 kubelet[3163]: I0311 01:25:00.372352 3163 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464d6928-5de2-4f22-96ff-832817a85459-kube-api-access-g5bjn" pod "464d6928-5de2-4f22-96ff-832817a85459" (UID: "464d6928-5de2-4f22-96ff-832817a85459"). InnerVolumeSpecName "kube-api-access-g5bjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 01:25:00.373006 systemd[1]: var-lib-kubelet-pods-464d6928\x2d5de2\x2d4f22\x2d96ff\x2d832817a85459-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg5bjn.mount: Deactivated successfully. Mar 11 01:25:00.373100 systemd[1]: var-lib-kubelet-pods-464d6928\x2d5de2\x2d4f22\x2d96ff\x2d832817a85459-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 11 01:25:00.373947 kubelet[3163]: I0311 01:25:00.373697 3163 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464d6928-5de2-4f22-96ff-832817a85459-whisker-backend-key-pair" pod "464d6928-5de2-4f22-96ff-832817a85459" (UID: "464d6928-5de2-4f22-96ff-832817a85459"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.202 [INFO][4366] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.202 [INFO][4366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" iface="eth0" netns="/var/run/netns/cni-c05ee33b-aaef-1050-f53a-a7ea74104669" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.202 [INFO][4366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" iface="eth0" netns="/var/run/netns/cni-c05ee33b-aaef-1050-f53a-a7ea74104669" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.202 [INFO][4366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" iface="eth0" netns="/var/run/netns/cni-c05ee33b-aaef-1050-f53a-a7ea74104669" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.202 [INFO][4366] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.203 [INFO][4366] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.286 [INFO][4419] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.286 [INFO][4419] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.294 [INFO][4419] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.346 [WARNING][4419] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.346 [INFO][4419] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.352 [INFO][4419] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.381649 containerd[1710]: 2026-03-11 01:25:00.361 [INFO][4366] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:00.384841 containerd[1710]: time="2026-03-11T01:25:00.384667519Z" level=info msg="TearDown network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\" successfully" Mar 11 01:25:00.384841 containerd[1710]: time="2026-03-11T01:25:00.384704039Z" level=info msg="StopPodSandbox for \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\" returns successfully" Mar 11 01:25:00.396358 containerd[1710]: time="2026-03-11T01:25:00.394527436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d7d4557-rm6ck,Uid:08e128c7-b7fc-48ab-8314-d723695bfcf7,Namespace:calico-system,Attempt:1,}" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.228 [INFO][4348] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.228 [INFO][4348] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" iface="eth0" netns="/var/run/netns/cni-f816cfcc-0542-e9b6-5f2e-4456bb4a16be" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.229 [INFO][4348] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" iface="eth0" netns="/var/run/netns/cni-f816cfcc-0542-e9b6-5f2e-4456bb4a16be" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.229 [INFO][4348] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" iface="eth0" netns="/var/run/netns/cni-f816cfcc-0542-e9b6-5f2e-4456bb4a16be" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.229 [INFO][4348] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.229 [INFO][4348] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.390 [INFO][4425] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.391 [INFO][4425] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.391 [INFO][4425] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.425 [WARNING][4425] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.425 [INFO][4425] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.429 [INFO][4425] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.433884 containerd[1710]: 2026-03-11 01:25:00.432 [INFO][4348] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:00.435424 containerd[1710]: time="2026-03-11T01:25:00.435353226Z" level=info msg="TearDown network for sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\" successfully" Mar 11 01:25:00.435424 containerd[1710]: time="2026-03-11T01:25:00.435380106Z" level=info msg="StopPodSandbox for \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\" returns successfully" Mar 11 01:25:00.443582 containerd[1710]: time="2026-03-11T01:25:00.442466384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-899bq,Uid:cf40c38e-e681-42e6-9e14-7a8a99367a10,Namespace:calico-system,Attempt:1,}" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.236 [INFO][4374] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.237 [INFO][4374] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" iface="eth0" netns="/var/run/netns/cni-e543eec2-40f1-3e28-1006-909844bfdf0a" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.237 [INFO][4374] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" iface="eth0" netns="/var/run/netns/cni-e543eec2-40f1-3e28-1006-909844bfdf0a" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.237 [INFO][4374] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" iface="eth0" netns="/var/run/netns/cni-e543eec2-40f1-3e28-1006-909844bfdf0a" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.237 [INFO][4374] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.237 [INFO][4374] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.409 [INFO][4430] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.413 [INFO][4430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.430 [INFO][4430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.445 [WARNING][4430] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.445 [INFO][4430] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.447 [INFO][4430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.459746 containerd[1710]: 2026-03-11 01:25:00.451 [INFO][4374] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:25:00.460362 kubelet[3163]: I0311 01:25:00.460297 3163 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-whisker-ca-bundle\") on node \"ci-4081.3.6-n-541af3988c\" DevicePath \"\"" Mar 11 01:25:00.460362 kubelet[3163]: I0311 01:25:00.460321 3163 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/464d6928-5de2-4f22-96ff-832817a85459-nginx-config\") on node \"ci-4081.3.6-n-541af3988c\" DevicePath \"\"" Mar 11 01:25:00.460362 kubelet[3163]: I0311 01:25:00.460331 3163 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/464d6928-5de2-4f22-96ff-832817a85459-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-541af3988c\" DevicePath \"\"" Mar 11 01:25:00.460362 kubelet[3163]: I0311 01:25:00.460344 3163 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-g5bjn\" (UniqueName: \"kubernetes.io/projected/464d6928-5de2-4f22-96ff-832817a85459-kube-api-access-g5bjn\") on node \"ci-4081.3.6-n-541af3988c\" DevicePath \"\"" Mar 11 01:25:00.460798 containerd[1710]: time="2026-03-11T01:25:00.460771300Z" level=info msg="TearDown network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\" successfully" Mar 11 01:25:00.460891 containerd[1710]: time="2026-03-11T01:25:00.460877420Z" level=info msg="StopPodSandbox for \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\" returns successfully" Mar 11 01:25:00.468928 containerd[1710]: time="2026-03-11T01:25:00.468889218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bwhjs,Uid:19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c,Namespace:kube-system,Attempt:1,}" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.277 [INFO][4327] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.281 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" iface="eth0" netns="/var/run/netns/cni-756d39f7-d3e7-a05b-5790-b28fbd6cdb97" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.281 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" iface="eth0" netns="/var/run/netns/cni-756d39f7-d3e7-a05b-5790-b28fbd6cdb97" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.282 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" iface="eth0" netns="/var/run/netns/cni-756d39f7-d3e7-a05b-5790-b28fbd6cdb97" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.282 [INFO][4327] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.282 [INFO][4327] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.425 [INFO][4438] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.425 [INFO][4438] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.451 [INFO][4438] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.471 [WARNING][4438] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.471 [INFO][4438] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.474 [INFO][4438] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.481571 containerd[1710]: 2026-03-11 01:25:00.477 [INFO][4327] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:00.481913 containerd[1710]: time="2026-03-11T01:25:00.481682295Z" level=info msg="TearDown network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\" successfully" Mar 11 01:25:00.481913 containerd[1710]: time="2026-03-11T01:25:00.481704375Z" level=info msg="StopPodSandbox for \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\" returns successfully" Mar 11 01:25:00.489475 containerd[1710]: time="2026-03-11T01:25:00.489424333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-9txnp,Uid:17b44239-9f88-4e92-b7af-79e5bf77ec3d,Namespace:calico-system,Attempt:1,}" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.308 [INFO][4384] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.309 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" iface="eth0" netns="/var/run/netns/cni-65f85496-0630-12df-92a1-d315b694adf0" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.310 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" iface="eth0" netns="/var/run/netns/cni-65f85496-0630-12df-92a1-d315b694adf0" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.310 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" iface="eth0" netns="/var/run/netns/cni-65f85496-0630-12df-92a1-d315b694adf0" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.310 [INFO][4384] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.310 [INFO][4384] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.429 [INFO][4443] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.429 [INFO][4443] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.474 [INFO][4443] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.493 [WARNING][4443] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.493 [INFO][4443] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.495 [INFO][4443] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.500752 containerd[1710]: 2026-03-11 01:25:00.499 [INFO][4384] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:00.501753 containerd[1710]: time="2026-03-11T01:25:00.501726650Z" level=info msg="TearDown network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\" successfully" Mar 11 01:25:00.501856 containerd[1710]: time="2026-03-11T01:25:00.501840490Z" level=info msg="StopPodSandbox for \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\" returns successfully" Mar 11 01:25:00.513424 containerd[1710]: time="2026-03-11T01:25:00.512627847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bphv7,Uid:560bb52c-64f3-4776-995a-2c4cdca92a07,Namespace:kube-system,Attempt:1,}" Mar 11 01:25:00.683109 systemd-networkd[1335]: calib9d36652224: Link UP Mar 11 01:25:00.683407 systemd-networkd[1335]: calib9d36652224: Gained carrier Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.445 [ERROR][4452] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.467 [INFO][4452] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0 goldmane-9f7667bb8- calico-system df051263-45b2-4a8e-8bd1-b2a4cb98fb5f 881 0 2026-03-11 01:24:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c goldmane-9f7667bb8-6qt4g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib9d36652224 [] [] }} ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" 
WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.467 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.522 [INFO][4480] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" HandleID="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.537 [INFO][4480] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" HandleID="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"goldmane-9f7667bb8-6qt4g", "timestamp":"2026-03-11 01:25:00.522990444 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000255600)} Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.537 [INFO][4480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.537 [INFO][4480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.537 [INFO][4480] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.543 [INFO][4480] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.558 [INFO][4480] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.568 [INFO][4480] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.572 [INFO][4480] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.576 [INFO][4480] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.576 [INFO][4480] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.578 [INFO][4480] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648 Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.584 [INFO][4480] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" 
host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.599 [INFO][4480] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.1/26] block=192.168.99.0/26 handle="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.599 [INFO][4480] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.1/26] handle="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.599 [INFO][4480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.717158 containerd[1710]: 2026-03-11 01:25:00.599 [INFO][4480] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.1/26] IPv6=[] ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" HandleID="k8s-pod-network.b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.717794 containerd[1710]: 2026-03-11 01:25:00.605 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"goldmane-9f7667bb8-6qt4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9d36652224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:00.717794 containerd[1710]: 2026-03-11 01:25:00.607 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.1/32] ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.717794 containerd[1710]: 2026-03-11 01:25:00.607 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9d36652224 ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.717794 containerd[1710]: 2026-03-11 01:25:00.687 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" 
WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.717794 containerd[1710]: 2026-03-11 01:25:00.688 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648", Pod:"goldmane-9f7667bb8-6qt4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9d36652224", MAC:"4a:20:bd:85:80:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:00.717794 
containerd[1710]: 2026-03-11 01:25:00.713 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qt4g" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:00.775820 systemd-networkd[1335]: cali6bab1fc9348: Link UP Mar 11 01:25:00.780878 systemd-networkd[1335]: cali6bab1fc9348: Gained carrier Mar 11 01:25:00.818287 containerd[1710]: time="2026-03-11T01:25:00.818047611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:00.818287 containerd[1710]: time="2026-03-11T01:25:00.818109611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:00.818287 containerd[1710]: time="2026-03-11T01:25:00.818125891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:00.818287 containerd[1710]: time="2026-03-11T01:25:00.818200451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.497 [ERROR][4471] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.522 [INFO][4471] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0 calico-kube-controllers-567d7d4557- calico-system 08e128c7-b7fc-48ab-8314-d723695bfcf7 884 0 2026-03-11 01:24:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:567d7d4557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c calico-kube-controllers-567d7d4557-rm6ck eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6bab1fc9348 [] [] }} ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.522 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.582 [INFO][4490] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" HandleID="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.608 [INFO][4490] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" HandleID="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307f10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"calico-kube-controllers-567d7d4557-rm6ck", "timestamp":"2026-03-11 01:25:00.58213583 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400052c2c0)} Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.608 [INFO][4490] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.608 [INFO][4490] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.608 [INFO][4490] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.643 [INFO][4490] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.661 [INFO][4490] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.677 [INFO][4490] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.688 [INFO][4490] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.713 [INFO][4490] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.713 [INFO][4490] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.722 [INFO][4490] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9 Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.730 [INFO][4490] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.747 [INFO][4490] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.99.2/26] block=192.168.99.0/26 handle="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.750 [INFO][4490] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.2/26] handle="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.750 [INFO][4490] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.847907 containerd[1710]: 2026-03-11 01:25:00.750 [INFO][4490] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.2/26] IPv6=[] ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" HandleID="k8s-pod-network.0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.848498 containerd[1710]: 2026-03-11 01:25:00.763 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0", GenerateName:"calico-kube-controllers-567d7d4557-", Namespace:"calico-system", SelfLink:"", UID:"08e128c7-b7fc-48ab-8314-d723695bfcf7", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"567d7d4557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"calico-kube-controllers-567d7d4557-rm6ck", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bab1fc9348", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:00.848498 containerd[1710]: 2026-03-11 01:25:00.763 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.2/32] ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.848498 containerd[1710]: 2026-03-11 01:25:00.763 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bab1fc9348 ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.848498 containerd[1710]: 2026-03-11 01:25:00.793 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" 
Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.848498 containerd[1710]: 2026-03-11 01:25:00.796 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0", GenerateName:"calico-kube-controllers-567d7d4557-", Namespace:"calico-system", SelfLink:"", UID:"08e128c7-b7fc-48ab-8314-d723695bfcf7", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567d7d4557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9", Pod:"calico-kube-controllers-567d7d4557-rm6ck", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bab1fc9348", MAC:"ba:3e:a6:ae:e9:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:00.848498 containerd[1710]: 2026-03-11 01:25:00.835 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9" Namespace="calico-system" Pod="calico-kube-controllers-567d7d4557-rm6ck" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:00.861325 systemd[1]: run-netns-cni\x2d756d39f7\x2dd3e7\x2da05b\x2d5790\x2db28fbd6cdb97.mount: Deactivated successfully. Mar 11 01:25:00.861406 systemd[1]: run-netns-cni\x2dc05ee33b\x2daaef\x2d1050\x2df53a\x2da7ea74104669.mount: Deactivated successfully. Mar 11 01:25:00.862535 systemd[1]: run-netns-cni\x2df816cfcc\x2d0542\x2de9b6\x2d5f2e\x2d4456bb4a16be.mount: Deactivated successfully. Mar 11 01:25:00.862611 systemd[1]: run-netns-cni\x2de543eec2\x2d40f1\x2d3e28\x2d1006\x2d909844bfdf0a.mount: Deactivated successfully. Mar 11 01:25:00.862688 systemd[1]: run-netns-cni\x2d65f85496\x2d0630\x2d12df\x2d92a1\x2dd315b694adf0.mount: Deactivated successfully. Mar 11 01:25:00.882022 systemd-networkd[1335]: cali4dce783ffd0: Link UP Mar 11 01:25:00.882868 systemd-networkd[1335]: cali4dce783ffd0: Gained carrier Mar 11 01:25:00.921149 systemd[1]: Started cri-containerd-b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648.scope - libcontainer container b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648. Mar 11 01:25:00.933625 containerd[1710]: time="2026-03-11T01:25:00.933037103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:00.933625 containerd[1710]: time="2026-03-11T01:25:00.933097583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:00.933625 containerd[1710]: time="2026-03-11T01:25:00.933123663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:00.933625 containerd[1710]: time="2026-03-11T01:25:00.933205583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.582 [ERROR][4494] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.604 [INFO][4494] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0 calico-apiserver-5bffd5d454- calico-system cf40c38e-e681-42e6-9e14-7a8a99367a10 885 0 2026-03-11 01:24:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bffd5d454 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c calico-apiserver-5bffd5d454-899bq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4dce783ffd0 [] [] }} ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-" Mar 11 
01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.606 [INFO][4494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.690 [INFO][4528] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" HandleID="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.715 [INFO][4528] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" HandleID="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbde0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"calico-apiserver-5bffd5d454-899bq", "timestamp":"2026-03-11 01:25:00.690932443 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186f20)} Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.715 [INFO][4528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.750 [INFO][4528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.750 [INFO][4528] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.762 [INFO][4528] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.774 [INFO][4528] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.785 [INFO][4528] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.793 [INFO][4528] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.796 [INFO][4528] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.797 [INFO][4528] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.836 [INFO][4528] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.856 [INFO][4528] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.868 [INFO][4528] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.99.3/26] block=192.168.99.0/26 handle="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.868 [INFO][4528] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.3/26] handle="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.868 [INFO][4528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:00.937803 containerd[1710]: 2026-03-11 01:25:00.869 [INFO][4528] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.3/26] IPv6=[] ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" HandleID="k8s-pod-network.f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.938300 containerd[1710]: 2026-03-11 01:25:00.873 [INFO][4494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"cf40c38e-e681-42e6-9e14-7a8a99367a10", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"calico-apiserver-5bffd5d454-899bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4dce783ffd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:00.938300 containerd[1710]: 2026-03-11 01:25:00.873 [INFO][4494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.3/32] ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.938300 containerd[1710]: 2026-03-11 01:25:00.873 [INFO][4494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dce783ffd0 ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.938300 containerd[1710]: 2026-03-11 01:25:00.882 [INFO][4494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" 
WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:00.938300 containerd[1710]: 2026-03-11 01:25:00.891 [INFO][4494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"cf40c38e-e681-42e6-9e14-7a8a99367a10", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb", Pod:"calico-apiserver-5bffd5d454-899bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4dce783ffd0", MAC:"a6:cb:bf:5c:b8:46", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:00.938300 containerd[1710]: 2026-03-11 01:25:00.924 [INFO][4494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-899bq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:01.004605 systemd[1]: Started cri-containerd-0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9.scope - libcontainer container 0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9. Mar 11 01:25:01.034257 systemd[1]: Removed slice kubepods-besteffort-pod464d6928_5de2_4f22_96ff_832817a85459.slice - libcontainer container kubepods-besteffort-pod464d6928_5de2_4f22_96ff_832817a85459.slice. Mar 11 01:25:01.047499 containerd[1710]: time="2026-03-11T01:25:01.047319314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:01.047982 containerd[1710]: time="2026-03-11T01:25:01.047498914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:01.047982 containerd[1710]: time="2026-03-11T01:25:01.047524714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.047982 containerd[1710]: time="2026-03-11T01:25:01.047867154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.078612 systemd[1]: Started cri-containerd-f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb.scope - libcontainer container f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb. 
Mar 11 01:25:01.088825 systemd-networkd[1335]: cali4baf5a675e2: Link UP Mar 11 01:25:01.091218 systemd-networkd[1335]: cali4baf5a675e2: Gained carrier Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.631 [ERROR][4511] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.658 [INFO][4511] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0 coredns-7d764666f9- kube-system 19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c 886 0 2026-03-11 01:24:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c coredns-7d764666f9-bwhjs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4baf5a675e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.660 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.926 [INFO][4571] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" 
HandleID="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.951 [INFO][4571] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" HandleID="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000398250), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"coredns-7d764666f9-bwhjs", "timestamp":"2026-03-11 01:25:00.926795304 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000342000)} Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.951 [INFO][4571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.951 [INFO][4571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.951 [INFO][4571] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.955 [INFO][4571] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.964 [INFO][4571] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.970 [INFO][4571] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:00.991 [INFO][4571] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.008 [INFO][4571] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.012 [INFO][4571] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.022 [INFO][4571] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.031 [INFO][4571] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.057 [INFO][4571] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.99.4/26] block=192.168.99.0/26 handle="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.057 [INFO][4571] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.4/26] handle="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.057 [INFO][4571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:01.152213 containerd[1710]: 2026-03-11 01:25:01.057 [INFO][4571] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.4/26] IPv6=[] ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" HandleID="k8s-pod-network.60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:01.152863 containerd[1710]: 2026-03-11 01:25:01.063 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"coredns-7d764666f9-bwhjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4baf5a675e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.152863 containerd[1710]: 2026-03-11 01:25:01.064 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.4/32] ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:01.152863 containerd[1710]: 2026-03-11 01:25:01.064 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4baf5a675e2 
ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:01.152863 containerd[1710]: 2026-03-11 01:25:01.097 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:01.152863 containerd[1710]: 2026-03-11 01:25:01.104 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa", 
Pod:"coredns-7d764666f9-bwhjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4baf5a675e2", MAC:"8e:d5:b8:f9:04:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.153050 containerd[1710]: 2026-03-11 01:25:01.140 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa" Namespace="kube-system" Pod="coredns-7d764666f9-bwhjs" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:01.183475 systemd[1]: Created slice kubepods-besteffort-podef00272e_f744_4493_8949_10e87e3ebbb5.slice - libcontainer container kubepods-besteffort-podef00272e_f744_4493_8949_10e87e3ebbb5.slice. 
Mar 11 01:25:01.205231 containerd[1710]: time="2026-03-11T01:25:01.205176075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-899bq,Uid:cf40c38e-e681-42e6-9e14-7a8a99367a10,Namespace:calico-system,Attempt:1,} returns sandbox id \"f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb\"" Mar 11 01:25:01.209987 containerd[1710]: time="2026-03-11T01:25:01.209567514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 01:25:01.213250 containerd[1710]: time="2026-03-11T01:25:01.212680553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:01.213250 containerd[1710]: time="2026-03-11T01:25:01.212741193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:01.213250 containerd[1710]: time="2026-03-11T01:25:01.212764873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.213250 containerd[1710]: time="2026-03-11T01:25:01.213104433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.238825 systemd-networkd[1335]: cali7651ff4b6ec: Link UP Mar 11 01:25:01.242387 systemd-networkd[1335]: cali7651ff4b6ec: Gained carrier Mar 11 01:25:01.274639 systemd[1]: Started cri-containerd-60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa.scope - libcontainer container 60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa. 
Mar 11 01:25:01.278187 kubelet[3163]: I0311 01:25:01.277478 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef00272e-f744-4493-8949-10e87e3ebbb5-whisker-backend-key-pair\") pod \"whisker-78fc85d578-dsz2k\" (UID: \"ef00272e-f744-4493-8949-10e87e3ebbb5\") " pod="calico-system/whisker-78fc85d578-dsz2k" Mar 11 01:25:01.278187 kubelet[3163]: I0311 01:25:01.277534 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ef00272e-f744-4493-8949-10e87e3ebbb5-nginx-config\") pod \"whisker-78fc85d578-dsz2k\" (UID: \"ef00272e-f744-4493-8949-10e87e3ebbb5\") " pod="calico-system/whisker-78fc85d578-dsz2k" Mar 11 01:25:01.278187 kubelet[3163]: I0311 01:25:01.277551 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4cqd\" (UniqueName: \"kubernetes.io/projected/ef00272e-f744-4493-8949-10e87e3ebbb5-kube-api-access-l4cqd\") pod \"whisker-78fc85d578-dsz2k\" (UID: \"ef00272e-f744-4493-8949-10e87e3ebbb5\") " pod="calico-system/whisker-78fc85d578-dsz2k" Mar 11 01:25:01.278187 kubelet[3163]: I0311 01:25:01.277567 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef00272e-f744-4493-8949-10e87e3ebbb5-whisker-ca-bundle\") pod \"whisker-78fc85d578-dsz2k\" (UID: \"ef00272e-f744-4493-8949-10e87e3ebbb5\") " pod="calico-system/whisker-78fc85d578-dsz2k" Mar 11 01:25:01.293339 containerd[1710]: time="2026-03-11T01:25:01.292590174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qt4g,Uid:df051263-45b2-4a8e-8bd1-b2a4cb98fb5f,Namespace:calico-system,Attempt:1,} returns sandbox id \"b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648\"" Mar 11 01:25:01.309492 
containerd[1710]: 2026-03-11 01:25:00.663 [ERROR][4516] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:00.747 [INFO][4516] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0 calico-apiserver-5bffd5d454- calico-system 17b44239-9f88-4e92-b7af-79e5bf77ec3d 887 0 2026-03-11 01:24:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bffd5d454 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c calico-apiserver-5bffd5d454-9txnp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7651ff4b6ec [] [] }} ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:00.747 [INFO][4516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:00.971 [INFO][4613] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" HandleID="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" 
Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.032 [INFO][4613] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" HandleID="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000356ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"calico-apiserver-5bffd5d454-9txnp", "timestamp":"2026-03-11 01:25:00.971916053 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000bedc0)} Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.032 [INFO][4613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.060 [INFO][4613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.060 [INFO][4613] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.084 [INFO][4613] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.122 [INFO][4613] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.161 [INFO][4613] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.169 [INFO][4613] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.175 [INFO][4613] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.178 [INFO][4613] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.180 [INFO][4613] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.199 [INFO][4613] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.221 [INFO][4613] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.99.5/26] block=192.168.99.0/26 handle="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.221 [INFO][4613] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.5/26] handle="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.221 [INFO][4613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:01.309492 containerd[1710]: 2026-03-11 01:25:01.221 [INFO][4613] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.5/26] IPv6=[] ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" HandleID="k8s-pod-network.b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:01.310013 containerd[1710]: 2026-03-11 01:25:01.227 [INFO][4516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"17b44239-9f88-4e92-b7af-79e5bf77ec3d", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"calico-apiserver-5bffd5d454-9txnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7651ff4b6ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.310013 containerd[1710]: 2026-03-11 01:25:01.227 [INFO][4516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.5/32] ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:01.310013 containerd[1710]: 2026-03-11 01:25:01.227 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7651ff4b6ec ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:01.310013 containerd[1710]: 2026-03-11 01:25:01.244 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" 
WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:01.310013 containerd[1710]: 2026-03-11 01:25:01.269 [INFO][4516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"17b44239-9f88-4e92-b7af-79e5bf77ec3d", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de", Pod:"calico-apiserver-5bffd5d454-9txnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7651ff4b6ec", MAC:"ca:12:cc:87:be:c6", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.310013 containerd[1710]: 2026-03-11 01:25:01.305 [INFO][4516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de" Namespace="calico-system" Pod="calico-apiserver-5bffd5d454-9txnp" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:01.319208 containerd[1710]: time="2026-03-11T01:25:01.318733847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567d7d4557-rm6ck,Uid:08e128c7-b7fc-48ab-8314-d723695bfcf7,Namespace:calico-system,Attempt:1,} returns sandbox id \"0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9\"" Mar 11 01:25:01.345890 systemd-networkd[1335]: cali39cdfc1cc86: Link UP Mar 11 01:25:01.352705 systemd-networkd[1335]: cali39cdfc1cc86: Gained carrier Mar 11 01:25:01.362496 containerd[1710]: time="2026-03-11T01:25:01.361824196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bwhjs,Uid:19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c,Namespace:kube-system,Attempt:1,} returns sandbox id \"60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa\"" Mar 11 01:25:01.375944 containerd[1710]: time="2026-03-11T01:25:01.375904913Z" level=info msg="CreateContainer within sandbox \"60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 01:25:01.377265 containerd[1710]: time="2026-03-11T01:25:01.376818553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:01.377265 containerd[1710]: time="2026-03-11T01:25:01.376870033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:01.377265 containerd[1710]: time="2026-03-11T01:25:01.376894433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.377265 containerd[1710]: time="2026-03-11T01:25:01.376980433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:00.796 [ERROR][4533] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:00.876 [INFO][4533] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0 coredns-7d764666f9- kube-system 560bb52c-64f3-4776-995a-2c4cdca92a07 888 0 2026-03-11 01:24:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c coredns-7d764666f9-bphv7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali39cdfc1cc86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:00.876 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" 
Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.000 [INFO][4692] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" HandleID="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.057 [INFO][4692] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" HandleID="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"coredns-7d764666f9-bphv7", "timestamp":"2026-03-11 01:25:01.000564726 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004094a0)} Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.057 [INFO][4692] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.221 [INFO][4692] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.221 [INFO][4692] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.225 [INFO][4692] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.236 [INFO][4692] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.251 [INFO][4692] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.265 [INFO][4692] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.276 [INFO][4692] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.276 [INFO][4692] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.280 [INFO][4692] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.299 [INFO][4692] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.326 [INFO][4692] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.99.6/26] block=192.168.99.0/26 handle="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.326 [INFO][4692] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.6/26] handle="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.326 [INFO][4692] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:01.396746 containerd[1710]: 2026-03-11 01:25:01.326 [INFO][4692] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.6/26] IPv6=[] ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" HandleID="k8s-pod-network.1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:01.397297 containerd[1710]: 2026-03-11 01:25:01.332 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"560bb52c-64f3-4776-995a-2c4cdca92a07", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"coredns-7d764666f9-bphv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39cdfc1cc86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.397297 containerd[1710]: 2026-03-11 01:25:01.332 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.6/32] ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:01.397297 containerd[1710]: 2026-03-11 01:25:01.332 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39cdfc1cc86 
ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:01.397297 containerd[1710]: 2026-03-11 01:25:01.356 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:01.397297 containerd[1710]: 2026-03-11 01:25:01.358 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"560bb52c-64f3-4776-995a-2c4cdca92a07", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af", 
Pod:"coredns-7d764666f9-bphv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39cdfc1cc86", MAC:"d2:93:b6:2f:16:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.397489 containerd[1710]: 2026-03-11 01:25:01.390 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af" Namespace="kube-system" Pod="coredns-7d764666f9-bphv7" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:01.404735 systemd[1]: Started cri-containerd-b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de.scope - libcontainer container b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de. 
Mar 11 01:25:01.428621 containerd[1710]: time="2026-03-11T01:25:01.428579820Z" level=info msg="CreateContainer within sandbox \"60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c56affd831adaab69787e668d420fb037eb4f6d70f86d6c6722509cc9a7041e\"" Mar 11 01:25:01.429865 containerd[1710]: time="2026-03-11T01:25:01.429823540Z" level=info msg="StartContainer for \"3c56affd831adaab69787e668d420fb037eb4f6d70f86d6c6722509cc9a7041e\"" Mar 11 01:25:01.452372 containerd[1710]: time="2026-03-11T01:25:01.452260774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:01.452686 containerd[1710]: time="2026-03-11T01:25:01.452396614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:01.452686 containerd[1710]: time="2026-03-11T01:25:01.452423974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.453478 containerd[1710]: time="2026-03-11T01:25:01.453033814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.474582 systemd[1]: Started cri-containerd-3c56affd831adaab69787e668d420fb037eb4f6d70f86d6c6722509cc9a7041e.scope - libcontainer container 3c56affd831adaab69787e668d420fb037eb4f6d70f86d6c6722509cc9a7041e. Mar 11 01:25:01.492646 systemd[1]: Started cri-containerd-1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af.scope - libcontainer container 1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af. 
Mar 11 01:25:01.500059 containerd[1710]: time="2026-03-11T01:25:01.498952242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fc85d578-dsz2k,Uid:ef00272e-f744-4493-8949-10e87e3ebbb5,Namespace:calico-system,Attempt:0,}" Mar 11 01:25:01.525949 containerd[1710]: time="2026-03-11T01:25:01.523201196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bffd5d454-9txnp,Uid:17b44239-9f88-4e92-b7af-79e5bf77ec3d,Namespace:calico-system,Attempt:1,} returns sandbox id \"b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de\"" Mar 11 01:25:01.583612 containerd[1710]: time="2026-03-11T01:25:01.583544821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bphv7,Uid:560bb52c-64f3-4776-995a-2c4cdca92a07,Namespace:kube-system,Attempt:1,} returns sandbox id \"1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af\"" Mar 11 01:25:01.584821 containerd[1710]: time="2026-03-11T01:25:01.583749261Z" level=info msg="StartContainer for \"3c56affd831adaab69787e668d420fb037eb4f6d70f86d6c6722509cc9a7041e\" returns successfully" Mar 11 01:25:01.595658 containerd[1710]: time="2026-03-11T01:25:01.595623018Z" level=info msg="CreateContainer within sandbox \"1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 01:25:01.694150 containerd[1710]: time="2026-03-11T01:25:01.694057554Z" level=info msg="CreateContainer within sandbox \"1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5d4c592593f648a201e3f3eb18128c375df440de44e7037edb2f660f6d9874d\"" Mar 11 01:25:01.697101 containerd[1710]: time="2026-03-11T01:25:01.695858834Z" level=info msg="StartContainer for \"b5d4c592593f648a201e3f3eb18128c375df440de44e7037edb2f660f6d9874d\"" Mar 11 01:25:01.754599 systemd[1]: Started 
cri-containerd-b5d4c592593f648a201e3f3eb18128c375df440de44e7037edb2f660f6d9874d.scope - libcontainer container b5d4c592593f648a201e3f3eb18128c375df440de44e7037edb2f660f6d9874d. Mar 11 01:25:01.819870 containerd[1710]: time="2026-03-11T01:25:01.819815443Z" level=info msg="StartContainer for \"b5d4c592593f648a201e3f3eb18128c375df440de44e7037edb2f660f6d9874d\" returns successfully" Mar 11 01:25:01.830251 kubelet[3163]: I0311 01:25:01.830209 3163 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="464d6928-5de2-4f22-96ff-832817a85459" path="/var/lib/kubelet/pods/464d6928-5de2-4f22-96ff-832817a85459/volumes" Mar 11 01:25:01.925725 systemd-networkd[1335]: cali4bcbf0fc6ca: Link UP Mar 11 01:25:01.927750 systemd-networkd[1335]: cali4bcbf0fc6ca: Gained carrier Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.762 [INFO][5016] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0 whisker-78fc85d578- calico-system ef00272e-f744-4493-8949-10e87e3ebbb5 918 0 2026-03-11 01:25:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78fc85d578 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c whisker-78fc85d578-dsz2k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4bcbf0fc6ca [] [] }} ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" Pod="whisker-78fc85d578-dsz2k" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.762 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" Pod="whisker-78fc85d578-dsz2k" 
WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.832 [INFO][5063] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" HandleID="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.843 [INFO][5063] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" HandleID="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000353cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"whisker-78fc85d578-dsz2k", "timestamp":"2026-03-11 01:25:01.832896 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000303a20)} Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.843 [INFO][5063] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.843 [INFO][5063] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.843 [INFO][5063] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.849 [INFO][5063] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.873 [INFO][5063] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.880 [INFO][5063] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.882 [INFO][5063] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.886 [INFO][5063] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.886 [INFO][5063] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.889 [INFO][5063] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.900 [INFO][5063] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.912 [INFO][5063] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.99.7/26] block=192.168.99.0/26 handle="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.912 [INFO][5063] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.7/26] handle="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.912 [INFO][5063] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:01.950619 containerd[1710]: 2026-03-11 01:25:01.912 [INFO][5063] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.7/26] IPv6=[] ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" HandleID="k8s-pod-network.b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" Mar 11 01:25:01.951255 containerd[1710]: 2026-03-11 01:25:01.916 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" Pod="whisker-78fc85d578-dsz2k" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0", GenerateName:"whisker-78fc85d578-", Namespace:"calico-system", SelfLink:"", UID:"ef00272e-f744-4493-8949-10e87e3ebbb5", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78fc85d578", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"whisker-78fc85d578-dsz2k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4bcbf0fc6ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.951255 containerd[1710]: 2026-03-11 01:25:01.917 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.7/32] ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" Pod="whisker-78fc85d578-dsz2k" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" Mar 11 01:25:01.951255 containerd[1710]: 2026-03-11 01:25:01.917 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4bcbf0fc6ca ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" Pod="whisker-78fc85d578-dsz2k" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" Mar 11 01:25:01.951255 containerd[1710]: 2026-03-11 01:25:01.929 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" Pod="whisker-78fc85d578-dsz2k" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" Mar 11 01:25:01.951255 containerd[1710]: 2026-03-11 01:25:01.929 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" Pod="whisker-78fc85d578-dsz2k" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0", GenerateName:"whisker-78fc85d578-", Namespace:"calico-system", SelfLink:"", UID:"ef00272e-f744-4493-8949-10e87e3ebbb5", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78fc85d578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d", Pod:"whisker-78fc85d578-dsz2k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4bcbf0fc6ca", MAC:"4a:ae:b1:5c:30:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:01.951255 containerd[1710]: 2026-03-11 01:25:01.944 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d" Namespace="calico-system" 
Pod="whisker-78fc85d578-dsz2k" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--78fc85d578--dsz2k-eth0" Mar 11 01:25:01.952559 kernel: calico-node[4577]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 11 01:25:01.971598 containerd[1710]: time="2026-03-11T01:25:01.971485205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:01.971598 containerd[1710]: time="2026-03-11T01:25:01.971561205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:01.971598 containerd[1710]: time="2026-03-11T01:25:01.971572525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.972256 containerd[1710]: time="2026-03-11T01:25:01.971940285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:01.986656 systemd-networkd[1335]: calib9d36652224: Gained IPv6LL Mar 11 01:25:02.040377 systemd[1]: Started cri-containerd-b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d.scope - libcontainer container b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d. 
Mar 11 01:25:02.091720 kubelet[3163]: I0311 01:25:02.085053 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-bwhjs" podStartSLOduration=35.085032177 podStartE2EDuration="35.085032177s" podCreationTimestamp="2026-03-11 01:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:25:02.058680664 +0000 UTC m=+40.349982596" watchObservedRunningTime="2026-03-11 01:25:02.085032177 +0000 UTC m=+40.376334109" Mar 11 01:25:02.091720 kubelet[3163]: I0311 01:25:02.085360 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-bphv7" podStartSLOduration=35.085355337 podStartE2EDuration="35.085355337s" podCreationTimestamp="2026-03-11 01:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:25:02.079474538 +0000 UTC m=+40.370776470" watchObservedRunningTime="2026-03-11 01:25:02.085355337 +0000 UTC m=+40.376657269" Mar 11 01:25:02.114617 systemd-networkd[1335]: cali6bab1fc9348: Gained IPv6LL Mar 11 01:25:02.168852 containerd[1710]: time="2026-03-11T01:25:02.168752316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fc85d578-dsz2k,Uid:ef00272e-f744-4493-8949-10e87e3ebbb5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d\"" Mar 11 01:25:02.178605 systemd-networkd[1335]: cali4baf5a675e2: Gained IPv6LL Mar 11 01:25:02.242606 systemd-networkd[1335]: cali4dce783ffd0: Gained IPv6LL Mar 11 01:25:02.498646 systemd-networkd[1335]: cali7651ff4b6ec: Gained IPv6LL Mar 11 01:25:02.522393 systemd-networkd[1335]: vxlan.calico: Link UP Mar 11 01:25:02.522402 systemd-networkd[1335]: vxlan.calico: Gained carrier Mar 11 01:25:03.202709 systemd-networkd[1335]: cali39cdfc1cc86: Gained IPv6LL Mar 11 
01:25:03.651043 systemd-networkd[1335]: cali4bcbf0fc6ca: Gained IPv6LL Mar 11 01:25:03.842665 systemd-networkd[1335]: vxlan.calico: Gained IPv6LL Mar 11 01:25:04.424833 containerd[1710]: time="2026-03-11T01:25:04.424783531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:04.428403 containerd[1710]: time="2026-03-11T01:25:04.428373531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 11 01:25:04.431749 containerd[1710]: time="2026-03-11T01:25:04.431697211Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:04.436885 containerd[1710]: time="2026-03-11T01:25:04.436835011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:04.437666 containerd[1710]: time="2026-03-11T01:25:04.437588291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.200314224s" Mar 11 01:25:04.437666 containerd[1710]: time="2026-03-11T01:25:04.437617571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 11 01:25:04.439766 containerd[1710]: time="2026-03-11T01:25:04.439733010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 11 01:25:04.448570 
containerd[1710]: time="2026-03-11T01:25:04.448536690Z" level=info msg="CreateContainer within sandbox \"f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 01:25:04.494269 containerd[1710]: time="2026-03-11T01:25:04.494151887Z" level=info msg="CreateContainer within sandbox \"f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"54321d17fb482ca725d9842aadf1d473b698b46e3ffcb3efcb4ff0cbba3f32f7\"" Mar 11 01:25:04.494946 containerd[1710]: time="2026-03-11T01:25:04.494838247Z" level=info msg="StartContainer for \"54321d17fb482ca725d9842aadf1d473b698b46e3ffcb3efcb4ff0cbba3f32f7\"" Mar 11 01:25:04.527568 systemd[1]: Started cri-containerd-54321d17fb482ca725d9842aadf1d473b698b46e3ffcb3efcb4ff0cbba3f32f7.scope - libcontainer container 54321d17fb482ca725d9842aadf1d473b698b46e3ffcb3efcb4ff0cbba3f32f7. Mar 11 01:25:04.564223 containerd[1710]: time="2026-03-11T01:25:04.564184723Z" level=info msg="StartContainer for \"54321d17fb482ca725d9842aadf1d473b698b46e3ffcb3efcb4ff0cbba3f32f7\" returns successfully" Mar 11 01:25:06.075324 kubelet[3163]: I0311 01:25:06.074936 3163 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 11 01:25:07.215316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001261252.mount: Deactivated successfully. 
Mar 11 01:25:07.600642 containerd[1710]: time="2026-03-11T01:25:07.600587860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:07.604134 containerd[1710]: time="2026-03-11T01:25:07.603962699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 11 01:25:07.608852 containerd[1710]: time="2026-03-11T01:25:07.607805939Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:07.613335 containerd[1710]: time="2026-03-11T01:25:07.613306338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:07.614143 containerd[1710]: time="2026-03-11T01:25:07.614112498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.174343168s" Mar 11 01:25:07.614214 containerd[1710]: time="2026-03-11T01:25:07.614144258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 11 01:25:07.616494 containerd[1710]: time="2026-03-11T01:25:07.616462458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 11 01:25:07.622972 containerd[1710]: time="2026-03-11T01:25:07.622932737Z" level=info msg="CreateContainer within sandbox 
\"b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 11 01:25:07.664202 containerd[1710]: time="2026-03-11T01:25:07.664097971Z" level=info msg="CreateContainer within sandbox \"b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1b2e9edbd117391b5b59225a9f118d0eb0f6fb25e0e7418bd746a759d9f1f50b\"" Mar 11 01:25:07.665650 containerd[1710]: time="2026-03-11T01:25:07.665618051Z" level=info msg="StartContainer for \"1b2e9edbd117391b5b59225a9f118d0eb0f6fb25e0e7418bd746a759d9f1f50b\"" Mar 11 01:25:07.695571 systemd[1]: Started cri-containerd-1b2e9edbd117391b5b59225a9f118d0eb0f6fb25e0e7418bd746a759d9f1f50b.scope - libcontainer container 1b2e9edbd117391b5b59225a9f118d0eb0f6fb25e0e7418bd746a759d9f1f50b. Mar 11 01:25:07.732129 containerd[1710]: time="2026-03-11T01:25:07.731794442Z" level=info msg="StartContainer for \"1b2e9edbd117391b5b59225a9f118d0eb0f6fb25e0e7418bd746a759d9f1f50b\" returns successfully" Mar 11 01:25:08.101485 kubelet[3163]: I0311 01:25:08.100237 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5bffd5d454-899bq" podStartSLOduration=25.870731136 podStartE2EDuration="29.100224552s" podCreationTimestamp="2026-03-11 01:24:39 +0000 UTC" firstStartedPulling="2026-03-11 01:25:01.209284594 +0000 UTC m=+39.500586526" lastFinishedPulling="2026-03-11 01:25:04.43877801 +0000 UTC m=+42.730079942" observedRunningTime="2026-03-11 01:25:05.112947358 +0000 UTC m=+43.404249290" watchObservedRunningTime="2026-03-11 01:25:08.100224552 +0000 UTC m=+46.391526484" Mar 11 01:25:10.786132 containerd[1710]: time="2026-03-11T01:25:10.786080787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:10.789510 containerd[1710]: 
time="2026-03-11T01:25:10.789456826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 11 01:25:10.793686 containerd[1710]: time="2026-03-11T01:25:10.793071546Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:10.798657 containerd[1710]: time="2026-03-11T01:25:10.798578665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:10.799206 containerd[1710]: time="2026-03-11T01:25:10.799173585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.181693368s" Mar 11 01:25:10.799206 containerd[1710]: time="2026-03-11T01:25:10.799205825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 11 01:25:10.819613 containerd[1710]: time="2026-03-11T01:25:10.819583982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 01:25:10.825046 containerd[1710]: time="2026-03-11T01:25:10.825012502Z" level=info msg="CreateContainer within sandbox \"0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 11 01:25:10.873619 containerd[1710]: time="2026-03-11T01:25:10.873543815Z" level=info msg="CreateContainer within sandbox 
\"0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8b6f7569f93ad62afd0e9a6a9b8d1cfc8d183be4b868a2e58bbd1ea8abc44983\"" Mar 11 01:25:10.878083 containerd[1710]: time="2026-03-11T01:25:10.875507215Z" level=info msg="StartContainer for \"8b6f7569f93ad62afd0e9a6a9b8d1cfc8d183be4b868a2e58bbd1ea8abc44983\"" Mar 11 01:25:10.941587 systemd[1]: Started cri-containerd-8b6f7569f93ad62afd0e9a6a9b8d1cfc8d183be4b868a2e58bbd1ea8abc44983.scope - libcontainer container 8b6f7569f93ad62afd0e9a6a9b8d1cfc8d183be4b868a2e58bbd1ea8abc44983. Mar 11 01:25:10.976788 containerd[1710]: time="2026-03-11T01:25:10.976667321Z" level=info msg="StartContainer for \"8b6f7569f93ad62afd0e9a6a9b8d1cfc8d183be4b868a2e58bbd1ea8abc44983\" returns successfully" Mar 11 01:25:11.110519 kubelet[3163]: I0311 01:25:11.110111 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-6qt4g" podStartSLOduration=25.793622617 podStartE2EDuration="32.110098583s" podCreationTimestamp="2026-03-11 01:24:39 +0000 UTC" firstStartedPulling="2026-03-11 01:25:01.299119012 +0000 UTC m=+39.590420944" lastFinishedPulling="2026-03-11 01:25:07.615594978 +0000 UTC m=+45.906896910" observedRunningTime="2026-03-11 01:25:08.103330871 +0000 UTC m=+46.394632803" watchObservedRunningTime="2026-03-11 01:25:11.110098583 +0000 UTC m=+49.401400475" Mar 11 01:25:11.138235 kubelet[3163]: I0311 01:25:11.137875 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-567d7d4557-rm6ck" podStartSLOduration=20.6597044 podStartE2EDuration="30.137862459s" podCreationTimestamp="2026-03-11 01:24:41 +0000 UTC" firstStartedPulling="2026-03-11 01:25:01.324381566 +0000 UTC m=+39.615683498" lastFinishedPulling="2026-03-11 01:25:10.802539625 +0000 UTC m=+49.093841557" observedRunningTime="2026-03-11 01:25:11.111001703 +0000 UTC m=+49.402303635" 
watchObservedRunningTime="2026-03-11 01:25:11.137862459 +0000 UTC m=+49.429164391" Mar 11 01:25:11.142405 containerd[1710]: time="2026-03-11T01:25:11.142365578Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:11.146258 containerd[1710]: time="2026-03-11T01:25:11.146224018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 11 01:25:11.150520 containerd[1710]: time="2026-03-11T01:25:11.150484457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 330.864155ms" Mar 11 01:25:11.150520 containerd[1710]: time="2026-03-11T01:25:11.150519657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 11 01:25:11.154035 containerd[1710]: time="2026-03-11T01:25:11.153538457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 11 01:25:11.160865 containerd[1710]: time="2026-03-11T01:25:11.160731176Z" level=info msg="CreateContainer within sandbox \"b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 01:25:11.202376 containerd[1710]: time="2026-03-11T01:25:11.202333530Z" level=info msg="CreateContainer within sandbox \"b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de4e6625a992c4ba3288f941f290f070a55627b8f49a94225f233387a0ddef0a\"" Mar 11 01:25:11.204089 containerd[1710]: 
time="2026-03-11T01:25:11.202962130Z" level=info msg="StartContainer for \"de4e6625a992c4ba3288f941f290f070a55627b8f49a94225f233387a0ddef0a\"" Mar 11 01:25:11.224566 systemd[1]: Started cri-containerd-de4e6625a992c4ba3288f941f290f070a55627b8f49a94225f233387a0ddef0a.scope - libcontainer container de4e6625a992c4ba3288f941f290f070a55627b8f49a94225f233387a0ddef0a. Mar 11 01:25:11.257850 containerd[1710]: time="2026-03-11T01:25:11.257797403Z" level=info msg="StartContainer for \"de4e6625a992c4ba3288f941f290f070a55627b8f49a94225f233387a0ddef0a\" returns successfully" Mar 11 01:25:12.113476 kubelet[3163]: I0311 01:25:12.113360 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5bffd5d454-9txnp" podStartSLOduration=23.527991975 podStartE2EDuration="33.113344966s" podCreationTimestamp="2026-03-11 01:24:39 +0000 UTC" firstStartedPulling="2026-03-11 01:25:01.566051746 +0000 UTC m=+39.857353678" lastFinishedPulling="2026-03-11 01:25:11.151404737 +0000 UTC m=+49.442706669" observedRunningTime="2026-03-11 01:25:12.112497007 +0000 UTC m=+50.403798939" watchObservedRunningTime="2026-03-11 01:25:12.113344966 +0000 UTC m=+50.404646898" Mar 11 01:25:13.096523 kubelet[3163]: I0311 01:25:13.096486 3163 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 11 01:25:13.793454 containerd[1710]: time="2026-03-11T01:25:13.793360250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:13.797227 containerd[1710]: time="2026-03-11T01:25:13.797197650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 11 01:25:13.801031 containerd[1710]: time="2026-03-11T01:25:13.801000009Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 11 01:25:13.807946 containerd[1710]: time="2026-03-11T01:25:13.807914247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:13.808675 containerd[1710]: time="2026-03-11T01:25:13.808650327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 2.65507483s" Mar 11 01:25:13.808713 containerd[1710]: time="2026-03-11T01:25:13.808680007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 11 01:25:13.818238 containerd[1710]: time="2026-03-11T01:25:13.818205604Z" level=info msg="CreateContainer within sandbox \"b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 11 01:25:13.821295 containerd[1710]: time="2026-03-11T01:25:13.821255044Z" level=info msg="StopPodSandbox for \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\"" Mar 11 01:25:13.849092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604694664.mount: Deactivated successfully. 
Mar 11 01:25:13.861905 containerd[1710]: time="2026-03-11T01:25:13.861866594Z" level=info msg="CreateContainer within sandbox \"b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d56a55ec47ea33e339992e812c83654f290d0c7efb5a1975659da90ef9257168\"" Mar 11 01:25:13.863932 containerd[1710]: time="2026-03-11T01:25:13.863901754Z" level=info msg="StartContainer for \"d56a55ec47ea33e339992e812c83654f290d0c7efb5a1975659da90ef9257168\"" Mar 11 01:25:13.914490 systemd[1]: run-containerd-runc-k8s.io-d56a55ec47ea33e339992e812c83654f290d0c7efb5a1975659da90ef9257168-runc.65MsHm.mount: Deactivated successfully. Mar 11 01:25:13.925600 systemd[1]: Started cri-containerd-d56a55ec47ea33e339992e812c83654f290d0c7efb5a1975659da90ef9257168.scope - libcontainer container d56a55ec47ea33e339992e812c83654f290d0c7efb5a1975659da90ef9257168. Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.950 [INFO][5563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.951 [INFO][5563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" iface="eth0" netns="/var/run/netns/cni-00486f38-c9cf-4b30-ebf9-5d556888dae7" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.951 [INFO][5563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" iface="eth0" netns="/var/run/netns/cni-00486f38-c9cf-4b30-ebf9-5d556888dae7" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.951 [INFO][5563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" iface="eth0" netns="/var/run/netns/cni-00486f38-c9cf-4b30-ebf9-5d556888dae7" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.951 [INFO][5563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.951 [INFO][5563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.987 [INFO][5590] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.989 [INFO][5590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:13.989 [INFO][5590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:14.001 [WARNING][5590] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:14.001 [INFO][5590] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:14.002 [INFO][5590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:14.008112 containerd[1710]: 2026-03-11 01:25:14.005 [INFO][5563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:14.008570 containerd[1710]: time="2026-03-11T01:25:14.008493759Z" level=info msg="TearDown network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\" successfully" Mar 11 01:25:14.008570 containerd[1710]: time="2026-03-11T01:25:14.008533399Z" level=info msg="StopPodSandbox for \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\" returns successfully" Mar 11 01:25:14.015856 containerd[1710]: time="2026-03-11T01:25:14.015696717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xdrq,Uid:37965836-ef4b-4e97-87a3-07d4d846b05a,Namespace:calico-system,Attempt:1,}" Mar 11 01:25:14.039731 containerd[1710]: time="2026-03-11T01:25:14.039685552Z" level=info msg="StartContainer for \"d56a55ec47ea33e339992e812c83654f290d0c7efb5a1975659da90ef9257168\" returns successfully" Mar 11 01:25:14.042193 containerd[1710]: time="2026-03-11T01:25:14.042159351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 11 
01:25:14.164779 systemd-networkd[1335]: cali27766d023c1: Link UP Mar 11 01:25:14.167536 systemd-networkd[1335]: cali27766d023c1: Gained carrier Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.090 [INFO][5614] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0 csi-node-driver- calico-system 37965836-ef4b-4e97-87a3-07d4d846b05a 1011 0 2026-03-11 01:24:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-541af3988c csi-node-driver-4xdrq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali27766d023c1 [] [] }} ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.090 [INFO][5614] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.116 [INFO][5626] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" HandleID="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.128 [INFO][5626] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" HandleID="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002edda0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-541af3988c", "pod":"csi-node-driver-4xdrq", "timestamp":"2026-03-11 01:25:14.116247013 +0000 UTC"}, Hostname:"ci-4081.3.6-n-541af3988c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400038e000)} Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.128 [INFO][5626] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.128 [INFO][5626] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.128 [INFO][5626] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-541af3988c' Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.130 [INFO][5626] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.135 [INFO][5626] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.139 [INFO][5626] ipam/ipam.go 526: Trying affinity for 192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.141 [INFO][5626] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.143 [INFO][5626] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.143 [INFO][5626] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.145 [INFO][5626] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74 Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.150 [INFO][5626] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.159 [INFO][5626] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.99.8/26] block=192.168.99.0/26 handle="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.159 [INFO][5626] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.8/26] handle="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" host="ci-4081.3.6-n-541af3988c" Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.159 [INFO][5626] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:14.194667 containerd[1710]: 2026-03-11 01:25:14.159 [INFO][5626] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.8/26] IPv6=[] ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" HandleID="k8s-pod-network.67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.196864 containerd[1710]: 2026-03-11 01:25:14.162 [INFO][5614] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37965836-ef4b-4e97-87a3-07d4d846b05a", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"", Pod:"csi-node-driver-4xdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27766d023c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:14.196864 containerd[1710]: 2026-03-11 01:25:14.162 [INFO][5614] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.8/32] ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.196864 containerd[1710]: 2026-03-11 01:25:14.162 [INFO][5614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27766d023c1 ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.196864 containerd[1710]: 2026-03-11 01:25:14.169 [INFO][5614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.196864 containerd[1710]: 2026-03-11 01:25:14.173 
[INFO][5614] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37965836-ef4b-4e97-87a3-07d4d846b05a", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74", Pod:"csi-node-driver-4xdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27766d023c1", MAC:"d6:ca:4e:ef:73:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:14.196864 containerd[1710]: 2026-03-11 01:25:14.191 [INFO][5614] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74" Namespace="calico-system" Pod="csi-node-driver-4xdrq" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:14.219768 containerd[1710]: time="2026-03-11T01:25:14.219501709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:25:14.219768 containerd[1710]: time="2026-03-11T01:25:14.219581149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:25:14.219768 containerd[1710]: time="2026-03-11T01:25:14.219592909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:14.220004 containerd[1710]: time="2026-03-11T01:25:14.219788308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:25:14.235593 systemd[1]: Started cri-containerd-67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74.scope - libcontainer container 67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74. Mar 11 01:25:14.254713 containerd[1710]: time="2026-03-11T01:25:14.254588820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xdrq,Uid:37965836-ef4b-4e97-87a3-07d4d846b05a,Namespace:calico-system,Attempt:1,} returns sandbox id \"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74\"" Mar 11 01:25:14.798603 kubelet[3163]: I0311 01:25:14.798472 3163 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 11 01:25:14.843190 systemd[1]: run-netns-cni\x2d00486f38\x2dc9cf\x2d4b30\x2debf9\x2d5d556888dae7.mount: Deactivated successfully. 
Mar 11 01:25:15.763420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438766723.mount: Deactivated successfully. Mar 11 01:25:15.810537 systemd-networkd[1335]: cali27766d023c1: Gained IPv6LL Mar 11 01:25:16.163464 containerd[1710]: time="2026-03-11T01:25:16.163050524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:17.064508 containerd[1710]: time="2026-03-11T01:25:17.064464148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 11 01:25:17.068554 containerd[1710]: time="2026-03-11T01:25:17.068493507Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:17.073506 containerd[1710]: time="2026-03-11T01:25:17.073447146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:17.074324 containerd[1710]: time="2026-03-11T01:25:17.074210506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 3.032011675s" Mar 11 01:25:17.074324 containerd[1710]: time="2026-03-11T01:25:17.074241946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 11 01:25:17.076839 containerd[1710]: time="2026-03-11T01:25:17.075590466Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 11 01:25:17.086555 containerd[1710]: time="2026-03-11T01:25:17.086526343Z" level=info msg="CreateContainer within sandbox \"b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 11 01:25:17.124293 containerd[1710]: time="2026-03-11T01:25:17.124184694Z" level=info msg="CreateContainer within sandbox \"b18ee5b120013657b5f31cc5b93882a40635363af561cd3ca65afd71d24b6c6d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"99f1b23a16898415df6ac3be3480fc13dc38d6f29a8257e079f81b3c507d2681\"" Mar 11 01:25:17.126077 containerd[1710]: time="2026-03-11T01:25:17.124990854Z" level=info msg="StartContainer for \"99f1b23a16898415df6ac3be3480fc13dc38d6f29a8257e079f81b3c507d2681\"" Mar 11 01:25:17.168666 systemd[1]: Started cri-containerd-99f1b23a16898415df6ac3be3480fc13dc38d6f29a8257e079f81b3c507d2681.scope - libcontainer container 99f1b23a16898415df6ac3be3480fc13dc38d6f29a8257e079f81b3c507d2681. 
Mar 11 01:25:17.206056 containerd[1710]: time="2026-03-11T01:25:17.205939075Z" level=info msg="StartContainer for \"99f1b23a16898415df6ac3be3480fc13dc38d6f29a8257e079f81b3c507d2681\" returns successfully" Mar 11 01:25:18.130880 kubelet[3163]: I0311 01:25:18.130812 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-78fc85d578-dsz2k" podStartSLOduration=2.227373263 podStartE2EDuration="17.130798133s" podCreationTimestamp="2026-03-11 01:25:01 +0000 UTC" firstStartedPulling="2026-03-11 01:25:02.171755716 +0000 UTC m=+40.463057608" lastFinishedPulling="2026-03-11 01:25:17.075180546 +0000 UTC m=+55.366482478" observedRunningTime="2026-03-11 01:25:18.130095294 +0000 UTC m=+56.421397266" watchObservedRunningTime="2026-03-11 01:25:18.130798133 +0000 UTC m=+56.422100185" Mar 11 01:25:18.666216 containerd[1710]: time="2026-03-11T01:25:18.666164126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:18.669815 containerd[1710]: time="2026-03-11T01:25:18.669665005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 11 01:25:18.673412 containerd[1710]: time="2026-03-11T01:25:18.673122044Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:18.678215 containerd[1710]: time="2026-03-11T01:25:18.678176883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:18.678984 containerd[1710]: time="2026-03-11T01:25:18.678956402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", 
repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.603334576s" Mar 11 01:25:18.679079 containerd[1710]: time="2026-03-11T01:25:18.679063882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 11 01:25:18.687006 containerd[1710]: time="2026-03-11T01:25:18.686971401Z" level=info msg="CreateContainer within sandbox \"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 11 01:25:18.749270 containerd[1710]: time="2026-03-11T01:25:18.749229346Z" level=info msg="CreateContainer within sandbox \"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f2c52514d9f88e982066184e8e3c9eb8f1233b3958351a48361300be65bd7ce0\"" Mar 11 01:25:18.751111 containerd[1710]: time="2026-03-11T01:25:18.749726906Z" level=info msg="StartContainer for \"f2c52514d9f88e982066184e8e3c9eb8f1233b3958351a48361300be65bd7ce0\"" Mar 11 01:25:18.783578 systemd[1]: Started cri-containerd-f2c52514d9f88e982066184e8e3c9eb8f1233b3958351a48361300be65bd7ce0.scope - libcontainer container f2c52514d9f88e982066184e8e3c9eb8f1233b3958351a48361300be65bd7ce0. 
Mar 11 01:25:18.812782 containerd[1710]: time="2026-03-11T01:25:18.812744290Z" level=info msg="StartContainer for \"f2c52514d9f88e982066184e8e3c9eb8f1233b3958351a48361300be65bd7ce0\" returns successfully" Mar 11 01:25:18.815587 containerd[1710]: time="2026-03-11T01:25:18.814651890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 11 01:25:20.648574 containerd[1710]: time="2026-03-11T01:25:20.647811012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:20.651203 containerd[1710]: time="2026-03-11T01:25:20.651173411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 11 01:25:20.655587 containerd[1710]: time="2026-03-11T01:25:20.655323730Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:20.663982 containerd[1710]: time="2026-03-11T01:25:20.663951248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:25:20.664918 containerd[1710]: time="2026-03-11T01:25:20.664888568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.850205558s" Mar 11 01:25:20.664987 containerd[1710]: time="2026-03-11T01:25:20.664919128Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 11 01:25:20.675045 containerd[1710]: time="2026-03-11T01:25:20.675014965Z" level=info msg="CreateContainer within sandbox \"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 11 01:25:20.717279 containerd[1710]: time="2026-03-11T01:25:20.717139475Z" level=info msg="CreateContainer within sandbox \"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"59dd77f8368de839e88c2bc8f012553cf849d30b31f502e3747aa605187647a9\"" Mar 11 01:25:20.718090 containerd[1710]: time="2026-03-11T01:25:20.718038035Z" level=info msg="StartContainer for \"59dd77f8368de839e88c2bc8f012553cf849d30b31f502e3747aa605187647a9\"" Mar 11 01:25:20.747581 systemd[1]: Started cri-containerd-59dd77f8368de839e88c2bc8f012553cf849d30b31f502e3747aa605187647a9.scope - libcontainer container 59dd77f8368de839e88c2bc8f012553cf849d30b31f502e3747aa605187647a9. 
Mar 11 01:25:20.779848 containerd[1710]: time="2026-03-11T01:25:20.779803500Z" level=info msg="StartContainer for \"59dd77f8368de839e88c2bc8f012553cf849d30b31f502e3747aa605187647a9\" returns successfully" Mar 11 01:25:20.962268 kubelet[3163]: I0311 01:25:20.962169 3163 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 11 01:25:20.962268 kubelet[3163]: I0311 01:25:20.962203 3163 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 11 01:25:21.140522 kubelet[3163]: I0311 01:25:21.140357 3163 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-4xdrq" podStartSLOduration=33.731080153 podStartE2EDuration="40.14033514s" podCreationTimestamp="2026-03-11 01:24:41 +0000 UTC" firstStartedPulling="2026-03-11 01:25:14.25667138 +0000 UTC m=+52.547973272" lastFinishedPulling="2026-03-11 01:25:20.665926327 +0000 UTC m=+58.957228259" observedRunningTime="2026-03-11 01:25:21.140270741 +0000 UTC m=+59.431572673" watchObservedRunningTime="2026-03-11 01:25:21.14033514 +0000 UTC m=+59.431637072" Mar 11 01:25:21.836739 containerd[1710]: time="2026-03-11T01:25:21.836698978Z" level=info msg="StopPodSandbox for \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\"" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.875 [WARNING][5842] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"560bb52c-64f3-4776-995a-2c4cdca92a07", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af", Pod:"coredns-7d764666f9-bphv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39cdfc1cc86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.875 [INFO][5842] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.875 [INFO][5842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" iface="eth0" netns="" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.875 [INFO][5842] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.875 [INFO][5842] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.899 [INFO][5850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.900 [INFO][5850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.900 [INFO][5850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.911 [WARNING][5850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.911 [INFO][5850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.913 [INFO][5850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:21.919009 containerd[1710]: 2026-03-11 01:25:21.915 [INFO][5842] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.919463 containerd[1710]: time="2026-03-11T01:25:21.919042793Z" level=info msg="TearDown network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\" successfully" Mar 11 01:25:21.919463 containerd[1710]: time="2026-03-11T01:25:21.919067953Z" level=info msg="StopPodSandbox for \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\" returns successfully" Mar 11 01:25:21.921971 containerd[1710]: time="2026-03-11T01:25:21.921816150Z" level=info msg="RemovePodSandbox for \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\"" Mar 11 01:25:21.923882 containerd[1710]: time="2026-03-11T01:25:21.923603548Z" level=info msg="Forcibly stopping sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\"" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.960 [WARNING][5865] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"560bb52c-64f3-4776-995a-2c4cdca92a07", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"1b3242fdef20ce347e7495cc6a673f8910a1143ab19d91cdb90fc7b48fd3e6af", Pod:"coredns-7d764666f9-bphv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39cdfc1cc86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.960 [INFO][5865] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.960 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" iface="eth0" netns="" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.960 [INFO][5865] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.960 [INFO][5865] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.979 [INFO][5872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.979 [INFO][5872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.979 [INFO][5872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.988 [WARNING][5872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.988 [INFO][5872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" HandleID="k8s-pod-network.dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bphv7-eth0" Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.989 [INFO][5872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:21.993563 containerd[1710]: 2026-03-11 01:25:21.991 [INFO][5865] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef" Mar 11 01:25:21.993996 containerd[1710]: time="2026-03-11T01:25:21.993603219Z" level=info msg="TearDown network for sandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\" successfully" Mar 11 01:25:22.008398 containerd[1710]: time="2026-03-11T01:25:22.008349000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:25:22.008539 containerd[1710]: time="2026-03-11T01:25:22.008452400Z" level=info msg="RemovePodSandbox \"dffe0e1dfd7f803870a8bcbedd1bc91b3cbadb20920456ff4b7376135c9751ef\" returns successfully" Mar 11 01:25:22.009082 containerd[1710]: time="2026-03-11T01:25:22.009052679Z" level=info msg="StopPodSandbox for \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\"" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.041 [WARNING][5886] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.041 [INFO][5886] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.041 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" iface="eth0" netns="" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.041 [INFO][5886] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.041 [INFO][5886] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.060 [INFO][5893] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.060 [INFO][5893] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.060 [INFO][5893] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.068 [WARNING][5893] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.068 [INFO][5893] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.070 [INFO][5893] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.073663 containerd[1710]: 2026-03-11 01:25:22.071 [INFO][5886] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.074011 containerd[1710]: time="2026-03-11T01:25:22.073702197Z" level=info msg="TearDown network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\" successfully" Mar 11 01:25:22.074011 containerd[1710]: time="2026-03-11T01:25:22.073727677Z" level=info msg="StopPodSandbox for \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\" returns successfully" Mar 11 01:25:22.074268 containerd[1710]: time="2026-03-11T01:25:22.074193757Z" level=info msg="RemovePodSandbox for \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\"" Mar 11 01:25:22.074299 containerd[1710]: time="2026-03-11T01:25:22.074278477Z" level=info msg="Forcibly stopping sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\"" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.107 [WARNING][5906] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" WorkloadEndpoint="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.107 [INFO][5906] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.107 [INFO][5906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" iface="eth0" netns="" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.107 [INFO][5906] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.107 [INFO][5906] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.128 [INFO][5913] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.129 [INFO][5913] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.129 [INFO][5913] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.138 [WARNING][5913] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.138 [INFO][5913] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" HandleID="k8s-pod-network.9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Workload="ci--4081.3.6--n--541af3988c-k8s-whisker--69548b5b65--pfgf4-eth0" Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.140 [INFO][5913] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.143527 containerd[1710]: 2026-03-11 01:25:22.142 [INFO][5906] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2" Mar 11 01:25:22.143527 containerd[1710]: time="2026-03-11T01:25:22.143409269Z" level=info msg="TearDown network for sandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\" successfully" Mar 11 01:25:22.152989 containerd[1710]: time="2026-03-11T01:25:22.152945657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:25:22.153097 containerd[1710]: time="2026-03-11T01:25:22.153044497Z" level=info msg="RemovePodSandbox \"9136dcf4ed9cf056a0ca8eb301589134fa94c0113183c1bfaa540f011d13bea2\" returns successfully" Mar 11 01:25:22.153509 containerd[1710]: time="2026-03-11T01:25:22.153484256Z" level=info msg="StopPodSandbox for \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\"" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.187 [WARNING][5927] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37965836-ef4b-4e97-87a3-07d4d846b05a", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74", Pod:"csi-node-driver-4xdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27766d023c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.190 [INFO][5927] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.190 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" iface="eth0" netns="" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.190 [INFO][5927] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.190 [INFO][5927] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.208 [INFO][5934] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.208 [INFO][5934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.208 [INFO][5934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.218 [WARNING][5934] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.218 [INFO][5934] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.219 [INFO][5934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.223608 containerd[1710]: 2026-03-11 01:25:22.221 [INFO][5927] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.224724 containerd[1710]: time="2026-03-11T01:25:22.223649167Z" level=info msg="TearDown network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\" successfully" Mar 11 01:25:22.224724 containerd[1710]: time="2026-03-11T01:25:22.223680367Z" level=info msg="StopPodSandbox for \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\" returns successfully" Mar 11 01:25:22.224724 containerd[1710]: time="2026-03-11T01:25:22.224204927Z" level=info msg="RemovePodSandbox for \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\"" Mar 11 01:25:22.224724 containerd[1710]: time="2026-03-11T01:25:22.224232247Z" level=info msg="Forcibly stopping sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\"" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.255 [WARNING][5948] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37965836-ef4b-4e97-87a3-07d4d846b05a", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"67652de870866180cd98a872a2d151939d707299abb67abd35c19f42fbd4dc74", Pod:"csi-node-driver-4xdrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27766d023c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.256 [INFO][5948] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.256 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" iface="eth0" netns="" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.256 [INFO][5948] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.256 [INFO][5948] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.276 [INFO][5955] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.276 [INFO][5955] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.276 [INFO][5955] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.285 [WARNING][5955] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.285 [INFO][5955] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" HandleID="k8s-pod-network.47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Workload="ci--4081.3.6--n--541af3988c-k8s-csi--node--driver--4xdrq-eth0" Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.287 [INFO][5955] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.289953 containerd[1710]: 2026-03-11 01:25:22.288 [INFO][5948] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8" Mar 11 01:25:22.291091 containerd[1710]: time="2026-03-11T01:25:22.290403443Z" level=info msg="TearDown network for sandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\" successfully" Mar 11 01:25:22.299936 containerd[1710]: time="2026-03-11T01:25:22.299895951Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:25:22.300088 containerd[1710]: time="2026-03-11T01:25:22.299972551Z" level=info msg="RemovePodSandbox \"47d0c37971568cb9e4646d8e6c447411b382645ffc58c53507c8303f106f4df8\" returns successfully" Mar 11 01:25:22.300714 containerd[1710]: time="2026-03-11T01:25:22.300396390Z" level=info msg="StopPodSandbox for \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\"" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.333 [WARNING][5970] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0", GenerateName:"calico-kube-controllers-567d7d4557-", Namespace:"calico-system", SelfLink:"", UID:"08e128c7-b7fc-48ab-8314-d723695bfcf7", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567d7d4557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9", Pod:"calico-kube-controllers-567d7d4557-rm6ck", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bab1fc9348", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.335 [INFO][5970] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.335 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" iface="eth0" netns="" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.335 [INFO][5970] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.335 [INFO][5970] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.353 [INFO][5977] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.353 [INFO][5977] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.353 [INFO][5977] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.363 [WARNING][5977] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.363 [INFO][5977] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.364 [INFO][5977] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.368261 containerd[1710]: 2026-03-11 01:25:22.366 [INFO][5970] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.368862 containerd[1710]: time="2026-03-11T01:25:22.368501144Z" level=info msg="TearDown network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\" successfully" Mar 11 01:25:22.368862 containerd[1710]: time="2026-03-11T01:25:22.368527984Z" level=info msg="StopPodSandbox for \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\" returns successfully" Mar 11 01:25:22.369454 containerd[1710]: time="2026-03-11T01:25:22.369339063Z" level=info msg="RemovePodSandbox for \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\"" Mar 11 01:25:22.369454 containerd[1710]: time="2026-03-11T01:25:22.369366823Z" level=info msg="Forcibly stopping sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\"" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.402 [WARNING][5991] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0", GenerateName:"calico-kube-controllers-567d7d4557-", Namespace:"calico-system", SelfLink:"", UID:"08e128c7-b7fc-48ab-8314-d723695bfcf7", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567d7d4557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"0d570ffcacec2017decf2155d29f337162c23069f27648129bae46892193f1d9", Pod:"calico-kube-controllers-567d7d4557-rm6ck", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bab1fc9348", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.402 [INFO][5991] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.402 [INFO][5991] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" iface="eth0" netns="" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.402 [INFO][5991] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.402 [INFO][5991] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.420 [INFO][5998] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.420 [INFO][5998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.420 [INFO][5998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.429 [WARNING][5998] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.429 [INFO][5998] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" HandleID="k8s-pod-network.d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--kube--controllers--567d7d4557--rm6ck-eth0" Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.430 [INFO][5998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.434307 containerd[1710]: 2026-03-11 01:25:22.432 [INFO][5991] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6" Mar 11 01:25:22.434307 containerd[1710]: time="2026-03-11T01:25:22.434288100Z" level=info msg="TearDown network for sandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\" successfully" Mar 11 01:25:22.443130 containerd[1710]: time="2026-03-11T01:25:22.443093689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:25:22.443236 containerd[1710]: time="2026-03-11T01:25:22.443158649Z" level=info msg="RemovePodSandbox \"d8619b349f63c19140b0de6ede8318fb42742700807e50f1e83ebbd71ea9eee6\" returns successfully" Mar 11 01:25:22.443939 containerd[1710]: time="2026-03-11T01:25:22.443683928Z" level=info msg="StopPodSandbox for \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\"" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.474 [WARNING][6012] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648", Pod:"goldmane-9f7667bb8-6qt4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calib9d36652224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.475 [INFO][6012] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.475 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" iface="eth0" netns="" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.475 [INFO][6012] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.475 [INFO][6012] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.493 [INFO][6019] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.493 [INFO][6019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.493 [INFO][6019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.503 [WARNING][6019] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.503 [INFO][6019] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.504 [INFO][6019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.508911 containerd[1710]: 2026-03-11 01:25:22.506 [INFO][6012] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.510029 containerd[1710]: time="2026-03-11T01:25:22.508962846Z" level=info msg="TearDown network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\" successfully" Mar 11 01:25:22.510029 containerd[1710]: time="2026-03-11T01:25:22.508987806Z" level=info msg="StopPodSandbox for \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\" returns successfully" Mar 11 01:25:22.510029 containerd[1710]: time="2026-03-11T01:25:22.509460125Z" level=info msg="RemovePodSandbox for \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\"" Mar 11 01:25:22.510029 containerd[1710]: time="2026-03-11T01:25:22.509489165Z" level=info msg="Forcibly stopping sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\"" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.557 [WARNING][6033] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"df051263-45b2-4a8e-8bd1-b2a4cb98fb5f", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"b787c792ae8a693aa20889c903327c9fc093c9f6924290d929deda25556df648", Pod:"goldmane-9f7667bb8-6qt4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9d36652224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.558 [INFO][6033] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.558 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" iface="eth0" netns="" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.558 [INFO][6033] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.558 [INFO][6033] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.576 [INFO][6040] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.576 [INFO][6040] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.576 [INFO][6040] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.589 [WARNING][6040] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.589 [INFO][6040] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" HandleID="k8s-pod-network.6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Workload="ci--4081.3.6--n--541af3988c-k8s-goldmane--9f7667bb8--6qt4g-eth0" Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.590 [INFO][6040] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.594299 containerd[1710]: 2026-03-11 01:25:22.592 [INFO][6033] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0" Mar 11 01:25:22.594839 containerd[1710]: time="2026-03-11T01:25:22.594314858Z" level=info msg="TearDown network for sandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\" successfully" Mar 11 01:25:22.602768 containerd[1710]: time="2026-03-11T01:25:22.602718447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:25:22.602869 containerd[1710]: time="2026-03-11T01:25:22.602805567Z" level=info msg="RemovePodSandbox \"6b68df6f53b21acdeb7373ca64575ce7ea43fd49ee3dc12fe954dfbb8a2c27e0\" returns successfully" Mar 11 01:25:22.603561 containerd[1710]: time="2026-03-11T01:25:22.603322686Z" level=info msg="StopPodSandbox for \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\"" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.637 [WARNING][6054] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"cf40c38e-e681-42e6-9e14-7a8a99367a10", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb", Pod:"calico-apiserver-5bffd5d454-899bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4dce783ffd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.637 [INFO][6054] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.637 [INFO][6054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" iface="eth0" netns="" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.637 [INFO][6054] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.637 [INFO][6054] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.660 [INFO][6062] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.660 [INFO][6062] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.660 [INFO][6062] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.668 [WARNING][6062] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.668 [INFO][6062] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.670 [INFO][6062] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.673975 containerd[1710]: 2026-03-11 01:25:22.672 [INFO][6054] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.673975 containerd[1710]: time="2026-03-11T01:25:22.673858077Z" level=info msg="TearDown network for sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\" successfully" Mar 11 01:25:22.673975 containerd[1710]: time="2026-03-11T01:25:22.673886037Z" level=info msg="StopPodSandbox for \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\" returns successfully" Mar 11 01:25:22.675160 containerd[1710]: time="2026-03-11T01:25:22.675129875Z" level=info msg="RemovePodSandbox for \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\"" Mar 11 01:25:22.675219 containerd[1710]: time="2026-03-11T01:25:22.675166915Z" level=info msg="Forcibly stopping sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\"" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.708 [WARNING][6076] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"cf40c38e-e681-42e6-9e14-7a8a99367a10", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"f00c51fd27601075b9053083885a04876755bc2d4718b8e357c3224f11ce14bb", Pod:"calico-apiserver-5bffd5d454-899bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4dce783ffd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.709 [INFO][6076] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.709 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" iface="eth0" netns="" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.709 [INFO][6076] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.709 [INFO][6076] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.727 [INFO][6083] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.727 [INFO][6083] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.727 [INFO][6083] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.736 [WARNING][6083] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.736 [INFO][6083] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" HandleID="k8s-pod-network.74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--899bq-eth0" Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.737 [INFO][6083] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.741304 containerd[1710]: 2026-03-11 01:25:22.739 [INFO][6076] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7" Mar 11 01:25:22.741707 containerd[1710]: time="2026-03-11T01:25:22.741293471Z" level=info msg="TearDown network for sandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\" successfully" Mar 11 01:25:22.750443 containerd[1710]: time="2026-03-11T01:25:22.750402820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:25:22.750544 containerd[1710]: time="2026-03-11T01:25:22.750499340Z" level=info msg="RemovePodSandbox \"74ce035da6c8d78fe7bf876fce21fe111085f61508310d11ed4970fa0f064fa7\" returns successfully" Mar 11 01:25:22.750983 containerd[1710]: time="2026-03-11T01:25:22.750957459Z" level=info msg="StopPodSandbox for \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\"" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.785 [WARNING][6097] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"17b44239-9f88-4e92-b7af-79e5bf77ec3d", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de", Pod:"calico-apiserver-5bffd5d454-9txnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7651ff4b6ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.785 [INFO][6097] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.785 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" iface="eth0" netns="" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.785 [INFO][6097] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.785 [INFO][6097] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.805 [INFO][6105] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.805 [INFO][6105] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.805 [INFO][6105] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.816 [WARNING][6105] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.816 [INFO][6105] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.818 [INFO][6105] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.821626 containerd[1710]: 2026-03-11 01:25:22.820 [INFO][6097] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.822190 containerd[1710]: time="2026-03-11T01:25:22.821680249Z" level=info msg="TearDown network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\" successfully" Mar 11 01:25:22.822190 containerd[1710]: time="2026-03-11T01:25:22.821707809Z" level=info msg="StopPodSandbox for \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\" returns successfully" Mar 11 01:25:22.822190 containerd[1710]: time="2026-03-11T01:25:22.822124889Z" level=info msg="RemovePodSandbox for \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\"" Mar 11 01:25:22.822190 containerd[1710]: time="2026-03-11T01:25:22.822152049Z" level=info msg="Forcibly stopping sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\"" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.858 [WARNING][6119] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0", GenerateName:"calico-apiserver-5bffd5d454-", Namespace:"calico-system", SelfLink:"", UID:"17b44239-9f88-4e92-b7af-79e5bf77ec3d", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bffd5d454", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"b803a9764502b265087951e6aa3b3e90cb8e51624a05ae0a14bdb044f3f384de", Pod:"calico-apiserver-5bffd5d454-9txnp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7651ff4b6ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.858 [INFO][6119] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.858 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" iface="eth0" netns="" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.858 [INFO][6119] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.858 [INFO][6119] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.881 [INFO][6126] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.881 [INFO][6126] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.881 [INFO][6126] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.891 [WARNING][6126] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.891 [INFO][6126] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" HandleID="k8s-pod-network.ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Workload="ci--4081.3.6--n--541af3988c-k8s-calico--apiserver--5bffd5d454--9txnp-eth0" Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.893 [INFO][6126] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:25:22.899274 containerd[1710]: 2026-03-11 01:25:22.896 [INFO][6119] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45" Mar 11 01:25:22.899922 containerd[1710]: time="2026-03-11T01:25:22.899309591Z" level=info msg="TearDown network for sandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\" successfully" Mar 11 01:25:22.908577 containerd[1710]: time="2026-03-11T01:25:22.908532259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:25:22.908696 containerd[1710]: time="2026-03-11T01:25:22.908612699Z" level=info msg="RemovePodSandbox \"ccc20ff8c5f78692baa7d43e9c3eb3b08503a59dc39f85904a2a6e0e79160a45\" returns successfully" Mar 11 01:25:22.909916 containerd[1710]: time="2026-03-11T01:25:22.909660018Z" level=info msg="StopPodSandbox for \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\"" Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.945 [WARNING][6149] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa", Pod:"coredns-7d764666f9-bwhjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4baf5a675e2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.946 [INFO][6149] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.946 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" iface="eth0" netns="" Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.946 [INFO][6149] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.946 [INFO][6149] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.964 [INFO][6157] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0" Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.964 [INFO][6157] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.965 [INFO][6157] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.973 [WARNING][6157] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0"
Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.974 [INFO][6157] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0"
Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.975 [INFO][6157] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 01:25:22.979077 containerd[1710]: 2026-03-11 01:25:22.977 [INFO][6149] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a"
Mar 11 01:25:22.979077 containerd[1710]: time="2026-03-11T01:25:22.978959330Z" level=info msg="TearDown network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\" successfully"
Mar 11 01:25:22.979077 containerd[1710]: time="2026-03-11T01:25:22.978983850Z" level=info msg="StopPodSandbox for \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\" returns successfully"
Mar 11 01:25:22.980286 containerd[1710]: time="2026-03-11T01:25:22.979882449Z" level=info msg="RemovePodSandbox for \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\""
Mar 11 01:25:22.980286 containerd[1710]: time="2026-03-11T01:25:22.979912209Z" level=info msg="Forcibly stopping sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\""
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.012 [WARNING][6171] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"19a36673-c7f4-4cd1-baa1-3ba9ae7cc00c", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-541af3988c", ContainerID:"60a383bdfe682ecab51df252e036ebd0e3401e92db46e9ce5fc048f74dddc4aa", Pod:"coredns-7d764666f9-bwhjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4baf5a675e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.012 [INFO][6171] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a"
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.012 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" iface="eth0" netns=""
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.012 [INFO][6171] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a"
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.012 [INFO][6171] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a"
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.030 [INFO][6178] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0"
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.030 [INFO][6178] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.030 [INFO][6178] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.038 [WARNING][6178] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0"
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.038 [INFO][6178] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" HandleID="k8s-pod-network.38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a" Workload="ci--4081.3.6--n--541af3988c-k8s-coredns--7d764666f9--bwhjs-eth0"
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.040 [INFO][6178] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 01:25:23.044401 containerd[1710]: 2026-03-11 01:25:23.041 [INFO][6171] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a"
Mar 11 01:25:23.044401 containerd[1710]: time="2026-03-11T01:25:23.043379528Z" level=info msg="TearDown network for sandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\" successfully"
Mar 11 01:25:23.053132 containerd[1710]: time="2026-03-11T01:25:23.053073156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 11 01:25:23.053238 containerd[1710]: time="2026-03-11T01:25:23.053194316Z" level=info msg="RemovePodSandbox \"38baa65819e89dbfc8a0a09996baa80c274fce903115f85961b6079c7697868a\" returns successfully"
Mar 11 01:25:30.066030 kubelet[3163]: I0311 01:25:30.065982 3163 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 11 01:25:58.397894 systemd[1]: run-containerd-runc-k8s.io-1b2e9edbd117391b5b59225a9f118d0eb0f6fb25e0e7418bd746a759d9f1f50b-runc.NDbWzz.mount: Deactivated successfully.
Mar 11 01:26:04.384520 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.16.10:53498.service - OpenSSH per-connection server daemon (10.200.16.10:53498).
Mar 11 01:26:04.898912 sshd[6360]: Accepted publickey for core from 10.200.16.10 port 53498 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:04.901085 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:04.905522 systemd-logind[1683]: New session 10 of user core.
Mar 11 01:26:04.912574 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 11 01:26:05.421325 sshd[6360]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:05.425336 systemd[1]: sshd@7-10.200.20.15:22-10.200.16.10:53498.service: Deactivated successfully.
Mar 11 01:26:05.427231 systemd[1]: session-10.scope: Deactivated successfully.
Mar 11 01:26:05.429383 systemd-logind[1683]: Session 10 logged out. Waiting for processes to exit.
Mar 11 01:26:05.430853 systemd-logind[1683]: Removed session 10.
Mar 11 01:26:10.508268 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.16.10:36746.service - OpenSSH per-connection server daemon (10.200.16.10:36746).
Mar 11 01:26:11.003840 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 36746 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:11.004705 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:11.008644 systemd-logind[1683]: New session 11 of user core.
Mar 11 01:26:11.016584 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 11 01:26:11.108100 systemd[1]: run-containerd-runc-k8s.io-8b6f7569f93ad62afd0e9a6a9b8d1cfc8d183be4b868a2e58bbd1ea8abc44983-runc.KP5FRx.mount: Deactivated successfully.
Mar 11 01:26:11.413997 sshd[6391]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:11.417496 systemd[1]: sshd@8-10.200.20.15:22-10.200.16.10:36746.service: Deactivated successfully.
Mar 11 01:26:11.421112 systemd[1]: session-11.scope: Deactivated successfully.
Mar 11 01:26:11.422075 systemd-logind[1683]: Session 11 logged out. Waiting for processes to exit.
Mar 11 01:26:11.422955 systemd-logind[1683]: Removed session 11.
Mar 11 01:26:16.496534 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.16.10:36756.service - OpenSSH per-connection server daemon (10.200.16.10:36756).
Mar 11 01:26:16.949003 sshd[6423]: Accepted publickey for core from 10.200.16.10 port 36756 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:16.950397 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:16.954842 systemd-logind[1683]: New session 12 of user core.
Mar 11 01:26:16.958558 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 11 01:26:17.340884 sshd[6423]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:17.344257 systemd-logind[1683]: Session 12 logged out. Waiting for processes to exit.
Mar 11 01:26:17.344749 systemd[1]: sshd@9-10.200.20.15:22-10.200.16.10:36756.service: Deactivated successfully.
Mar 11 01:26:17.347001 systemd[1]: session-12.scope: Deactivated successfully.
Mar 11 01:26:17.348144 systemd-logind[1683]: Removed session 12.
Mar 11 01:26:22.433680 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.16.10:53748.service - OpenSSH per-connection server daemon (10.200.16.10:53748).
Mar 11 01:26:22.924779 sshd[6438]: Accepted publickey for core from 10.200.16.10 port 53748 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:22.925640 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:22.929291 systemd-logind[1683]: New session 13 of user core.
Mar 11 01:26:22.934573 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 11 01:26:23.348483 sshd[6438]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:23.352483 systemd-logind[1683]: Session 13 logged out. Waiting for processes to exit.
Mar 11 01:26:23.353079 systemd[1]: sshd@10-10.200.20.15:22-10.200.16.10:53748.service: Deactivated successfully.
Mar 11 01:26:23.356111 systemd[1]: session-13.scope: Deactivated successfully.
Mar 11 01:26:23.358056 systemd-logind[1683]: Removed session 13.
Mar 11 01:26:23.455667 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.16.10:53756.service - OpenSSH per-connection server daemon (10.200.16.10:53756).
Mar 11 01:26:23.935040 sshd[6483]: Accepted publickey for core from 10.200.16.10 port 53756 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:23.935698 sshd[6483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:23.939481 systemd-logind[1683]: New session 14 of user core.
Mar 11 01:26:23.945571 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 11 01:26:24.378451 sshd[6483]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:24.382077 systemd[1]: sshd@11-10.200.20.15:22-10.200.16.10:53756.service: Deactivated successfully.
Mar 11 01:26:24.386014 systemd[1]: session-14.scope: Deactivated successfully.
Mar 11 01:26:24.388741 systemd-logind[1683]: Session 14 logged out. Waiting for processes to exit.
Mar 11 01:26:24.389627 systemd-logind[1683]: Removed session 14.
Mar 11 01:26:24.467733 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.16.10:53768.service - OpenSSH per-connection server daemon (10.200.16.10:53768).
Mar 11 01:26:24.932542 sshd[6494]: Accepted publickey for core from 10.200.16.10 port 53768 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:24.933358 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:24.936978 systemd-logind[1683]: New session 15 of user core.
Mar 11 01:26:24.941621 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 11 01:26:25.327063 sshd[6494]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:25.330358 systemd[1]: sshd@12-10.200.20.15:22-10.200.16.10:53768.service: Deactivated successfully.
Mar 11 01:26:25.332384 systemd[1]: session-15.scope: Deactivated successfully.
Mar 11 01:26:25.333211 systemd-logind[1683]: Session 15 logged out. Waiting for processes to exit.
Mar 11 01:26:25.334084 systemd-logind[1683]: Removed session 15.
Mar 11 01:26:30.414333 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.16.10:56008.service - OpenSSH per-connection server daemon (10.200.16.10:56008).
Mar 11 01:26:30.899388 sshd[6509]: Accepted publickey for core from 10.200.16.10 port 56008 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:30.900343 sshd[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:30.904795 systemd-logind[1683]: New session 16 of user core.
Mar 11 01:26:30.909570 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 11 01:26:31.375919 sshd[6509]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:31.379148 systemd-logind[1683]: Session 16 logged out. Waiting for processes to exit.
Mar 11 01:26:31.379499 systemd[1]: sshd@13-10.200.20.15:22-10.200.16.10:56008.service: Deactivated successfully.
Mar 11 01:26:31.381216 systemd[1]: session-16.scope: Deactivated successfully.
Mar 11 01:26:31.383419 systemd-logind[1683]: Removed session 16.
Mar 11 01:26:31.452381 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.16.10:56010.service - OpenSSH per-connection server daemon (10.200.16.10:56010).
Mar 11 01:26:31.951123 sshd[6552]: Accepted publickey for core from 10.200.16.10 port 56010 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:31.952538 sshd[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:31.957178 systemd-logind[1683]: New session 17 of user core.
Mar 11 01:26:31.959792 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 11 01:26:32.503198 sshd[6552]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:32.506805 systemd[1]: sshd@14-10.200.20.15:22-10.200.16.10:56010.service: Deactivated successfully.
Mar 11 01:26:32.509162 systemd[1]: session-17.scope: Deactivated successfully.
Mar 11 01:26:32.510131 systemd-logind[1683]: Session 17 logged out. Waiting for processes to exit.
Mar 11 01:26:32.511157 systemd-logind[1683]: Removed session 17.
Mar 11 01:26:32.591966 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.16.10:56012.service - OpenSSH per-connection server daemon (10.200.16.10:56012).
Mar 11 01:26:33.088459 sshd[6563]: Accepted publickey for core from 10.200.16.10 port 56012 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:33.089314 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:33.093689 systemd-logind[1683]: New session 18 of user core.
Mar 11 01:26:33.102600 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 11 01:26:34.079942 sshd[6563]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:34.085625 systemd-logind[1683]: Session 18 logged out. Waiting for processes to exit.
Mar 11 01:26:34.086021 systemd[1]: sshd@15-10.200.20.15:22-10.200.16.10:56012.service: Deactivated successfully.
Mar 11 01:26:34.088150 systemd[1]: session-18.scope: Deactivated successfully.
Mar 11 01:26:34.091270 systemd-logind[1683]: Removed session 18.
Mar 11 01:26:34.169675 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.16.10:56020.service - OpenSSH per-connection server daemon (10.200.16.10:56020).
Mar 11 01:26:34.659730 sshd[6587]: Accepted publickey for core from 10.200.16.10 port 56020 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:34.663539 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:34.668764 systemd-logind[1683]: New session 19 of user core.
Mar 11 01:26:34.675607 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 11 01:26:35.183892 sshd[6587]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:35.187064 systemd-logind[1683]: Session 19 logged out. Waiting for processes to exit.
Mar 11 01:26:35.188075 systemd[1]: sshd@16-10.200.20.15:22-10.200.16.10:56020.service: Deactivated successfully.
Mar 11 01:26:35.190419 systemd[1]: session-19.scope: Deactivated successfully.
Mar 11 01:26:35.193211 systemd-logind[1683]: Removed session 19.
Mar 11 01:26:35.266854 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.16.10:56034.service - OpenSSH per-connection server daemon (10.200.16.10:56034).
Mar 11 01:26:35.730681 sshd[6606]: Accepted publickey for core from 10.200.16.10 port 56034 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:35.732018 sshd[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:35.735903 systemd-logind[1683]: New session 20 of user core.
Mar 11 01:26:35.743587 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 11 01:26:36.124509 sshd[6606]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:36.127966 systemd[1]: sshd@17-10.200.20.15:22-10.200.16.10:56034.service: Deactivated successfully.
Mar 11 01:26:36.130023 systemd[1]: session-20.scope: Deactivated successfully.
Mar 11 01:26:36.130928 systemd-logind[1683]: Session 20 logged out. Waiting for processes to exit.
Mar 11 01:26:36.132233 systemd-logind[1683]: Removed session 20.
Mar 11 01:26:41.208272 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.16.10:45216.service - OpenSSH per-connection server daemon (10.200.16.10:45216).
Mar 11 01:26:41.673290 sshd[6696]: Accepted publickey for core from 10.200.16.10 port 45216 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:41.674783 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:41.678850 systemd-logind[1683]: New session 21 of user core.
Mar 11 01:26:41.686612 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 11 01:26:42.074551 sshd[6696]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:42.078809 systemd[1]: sshd@18-10.200.20.15:22-10.200.16.10:45216.service: Deactivated successfully.
Mar 11 01:26:42.081042 systemd[1]: session-21.scope: Deactivated successfully.
Mar 11 01:26:42.082713 systemd-logind[1683]: Session 21 logged out. Waiting for processes to exit.
Mar 11 01:26:42.083814 systemd-logind[1683]: Removed session 21.
Mar 11 01:26:47.163119 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.16.10:45220.service - OpenSSH per-connection server daemon (10.200.16.10:45220).
Mar 11 01:26:47.643937 sshd[6709]: Accepted publickey for core from 10.200.16.10 port 45220 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:47.645320 sshd[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:47.649074 systemd-logind[1683]: New session 22 of user core.
Mar 11 01:26:47.657574 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 11 01:26:48.042804 sshd[6709]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:48.046544 systemd[1]: sshd@19-10.200.20.15:22-10.200.16.10:45220.service: Deactivated successfully.
Mar 11 01:26:48.048264 systemd[1]: session-22.scope: Deactivated successfully.
Mar 11 01:26:48.048961 systemd-logind[1683]: Session 22 logged out. Waiting for processes to exit.
Mar 11 01:26:48.049842 systemd-logind[1683]: Removed session 22.
Mar 11 01:26:53.133261 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.16.10:57870.service - OpenSSH per-connection server daemon (10.200.16.10:57870).
Mar 11 01:26:53.635079 sshd[6742]: Accepted publickey for core from 10.200.16.10 port 57870 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:53.636552 sshd[6742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:53.641211 systemd-logind[1683]: New session 23 of user core.
Mar 11 01:26:53.645582 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 11 01:26:54.049975 sshd[6742]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:54.053752 systemd[1]: sshd@20-10.200.20.15:22-10.200.16.10:57870.service: Deactivated successfully.
Mar 11 01:26:54.056198 systemd[1]: session-23.scope: Deactivated successfully.
Mar 11 01:26:54.057504 systemd-logind[1683]: Session 23 logged out. Waiting for processes to exit.
Mar 11 01:26:54.058578 systemd-logind[1683]: Removed session 23.
Mar 11 01:26:59.138916 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.16.10:57880.service - OpenSSH per-connection server daemon (10.200.16.10:57880).
Mar 11 01:26:59.584259 sshd[6774]: Accepted publickey for core from 10.200.16.10 port 57880 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:59.585689 sshd[6774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:59.589422 systemd-logind[1683]: New session 24 of user core.
Mar 11 01:26:59.596593 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 11 01:26:59.978116 sshd[6774]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:59.982712 systemd[1]: sshd@21-10.200.20.15:22-10.200.16.10:57880.service: Deactivated successfully.
Mar 11 01:26:59.984758 systemd[1]: session-24.scope: Deactivated successfully.
Mar 11 01:26:59.985655 systemd-logind[1683]: Session 24 logged out. Waiting for processes to exit.
Mar 11 01:26:59.986545 systemd-logind[1683]: Removed session 24.
Mar 11 01:27:05.072683 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.16.10:38894.service - OpenSSH per-connection server daemon (10.200.16.10:38894).
Mar 11 01:27:05.539118 sshd[6808]: Accepted publickey for core from 10.200.16.10 port 38894 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:27:05.539976 sshd[6808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:27:05.543792 systemd-logind[1683]: New session 25 of user core.
Mar 11 01:27:05.551576 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 11 01:27:05.939673 sshd[6808]: pam_unix(sshd:session): session closed for user core
Mar 11 01:27:05.942557 systemd-logind[1683]: Session 25 logged out. Waiting for processes to exit.
Mar 11 01:27:05.942653 systemd[1]: session-25.scope: Deactivated successfully.
Mar 11 01:27:05.943870 systemd[1]: sshd@22-10.200.20.15:22-10.200.16.10:38894.service: Deactivated successfully.
Mar 11 01:27:05.946352 systemd-logind[1683]: Removed session 25.