Mar 7 01:23:54.169894 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 7 01:23:54.169915 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026
Mar 7 01:23:54.169923 kernel: KASLR enabled
Mar 7 01:23:54.169929 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 7 01:23:54.169936 kernel: printk: bootconsole [pl11] enabled
Mar 7 01:23:54.169942 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:23:54.169949 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 7 01:23:54.169955 kernel: random: crng init done
Mar 7 01:23:54.169961 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:23:54.169967 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 7 01:23:54.169973 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.169979 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.169986 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 01:23:54.169992 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.170000 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.170006 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.170013 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.170020 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.170027 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.170033 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 7 01:23:54.170040 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:23:54.170046 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 7 01:23:54.170052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 7 01:23:54.170059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 7 01:23:54.170065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 7 01:23:54.170071 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 7 01:23:54.170078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 7 01:23:54.170084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 7 01:23:54.170092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 7 01:23:54.170098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 7 01:23:54.170105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 7 01:23:54.170111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 7 01:23:54.170117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 7 01:23:54.170124 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 7 01:23:54.170130 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff]
Mar 7 01:23:54.170136 kernel: Zone ranges:
Mar 7 01:23:54.170143 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 7 01:23:54.170149 kernel: DMA32 empty
Mar 7 01:23:54.170155 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 01:23:54.170161 kernel: Movable zone start for each node
Mar 7 01:23:54.170172 kernel: Early memory node ranges
Mar 7 01:23:54.170178 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 7 01:23:54.170185 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 7 01:23:54.170192 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 7 01:23:54.170203 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 7 01:23:54.170212 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 7 01:23:54.170219 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 7 01:23:54.170226 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 01:23:54.170232 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 7 01:23:54.170239 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 7 01:23:54.170246 kernel: psci: probing for conduit method from ACPI.
Mar 7 01:23:54.170253 kernel: psci: PSCIv1.1 detected in firmware.
Mar 7 01:23:54.170259 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 01:23:54.170266 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 7 01:23:54.170273 kernel: psci: SMC Calling Convention v1.4
Mar 7 01:23:54.170280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 7 01:23:54.170286 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 7 01:23:54.170294 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 7 01:23:54.170301 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 7 01:23:54.170308 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 01:23:54.170314 kernel: Detected PIPT I-cache on CPU0
Mar 7 01:23:54.170321 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 01:23:54.170328 kernel: CPU features: detected: Hardware dirty bit management
Mar 7 01:23:54.170335 kernel: CPU features: detected: Spectre-BHB
Mar 7 01:23:54.170342 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 7 01:23:54.170348 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 7 01:23:54.170355 kernel: CPU features: detected: ARM erratum 1418040
Mar 7 01:23:54.170362 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 7 01:23:54.170370 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 7 01:23:54.170377 kernel: alternatives: applying boot alternatives
Mar 7 01:23:54.170385 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 01:23:54.170392 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:23:54.170399 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:23:54.170405 kernel: Fallback order for Node 0: 0
Mar 7 01:23:54.170412 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 7 01:23:54.170419 kernel: Policy zone: Normal
Mar 7 01:23:54.170426 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:23:54.170432 kernel: software IO TLB: area num 2.
Mar 7 01:23:54.170439 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 7 01:23:54.170448 kernel: Memory: 3982640K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211520K reserved, 0K cma-reserved)
Mar 7 01:23:54.170455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:23:54.170461 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:23:54.170468 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:23:54.170475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:23:54.170482 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:23:54.170489 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:23:54.170496 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:23:54.170503 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:23:54.170509 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 01:23:54.170516 kernel: GICv3: 960 SPIs implemented
Mar 7 01:23:54.170524 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 01:23:54.170531 kernel: Root IRQ handler: gic_handle_irq
Mar 7 01:23:54.170537 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 7 01:23:54.170544 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 7 01:23:54.170551 kernel: ITS: No ITS available, not enabling LPIs
Mar 7 01:23:54.170558 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:23:54.170565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 01:23:54.170572 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 7 01:23:54.170579 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 7 01:23:54.170585 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 7 01:23:54.170592 kernel: Console: colour dummy device 80x25
Mar 7 01:23:54.170601 kernel: printk: console [tty1] enabled
Mar 7 01:23:54.170608 kernel: ACPI: Core revision 20230628
Mar 7 01:23:54.170615 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 7 01:23:54.170622 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:23:54.170629 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:23:54.170636 kernel: landlock: Up and running.
Mar 7 01:23:54.170643 kernel: SELinux: Initializing.
Mar 7 01:23:54.170650 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:23:54.170657 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:23:54.170665 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:23:54.170672 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:23:54.170679 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 7 01:23:54.170686 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 7 01:23:54.170693 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 01:23:54.170700 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:23:54.170707 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:23:54.170714 kernel: Remapping and enabling EFI services.
Mar 7 01:23:54.170727 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:23:54.170734 kernel: Detected PIPT I-cache on CPU1
Mar 7 01:23:54.170741 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 7 01:23:54.170748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 01:23:54.170757 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 7 01:23:54.170764 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:23:54.170771 kernel: SMP: Total of 2 processors activated.
Mar 7 01:23:54.170779 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 01:23:54.170786 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 7 01:23:54.170795 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 7 01:23:54.170803 kernel: CPU features: detected: CRC32 instructions
Mar 7 01:23:54.170810 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 7 01:23:54.170817 kernel: CPU features: detected: LSE atomic instructions
Mar 7 01:23:54.170824 kernel: CPU features: detected: Privileged Access Never
Mar 7 01:23:54.170831 kernel: CPU: All CPU(s) started at EL1
Mar 7 01:23:54.170839 kernel: alternatives: applying system-wide alternatives
Mar 7 01:23:54.170846 kernel: devtmpfs: initialized
Mar 7 01:23:54.170853 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:23:54.170862 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:23:54.170870 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:23:54.170877 kernel: SMBIOS 3.1.0 present.
Mar 7 01:23:54.170884 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 7 01:23:54.170892 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:23:54.170899 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 01:23:54.170907 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 01:23:54.170914 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 01:23:54.170921 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:23:54.170930 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 7 01:23:54.170937 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:23:54.170945 kernel: cpuidle: using governor menu
Mar 7 01:23:54.170952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 01:23:54.170959 kernel: ASID allocator initialised with 32768 entries
Mar 7 01:23:54.170966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:23:54.170974 kernel: Serial: AMBA PL011 UART driver
Mar 7 01:23:54.170981 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 7 01:23:54.170988 kernel: Modules: 0 pages in range for non-PLT usage
Mar 7 01:23:54.170997 kernel: Modules: 509008 pages in range for PLT usage
Mar 7 01:23:54.171004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:23:54.171011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:23:54.171019 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 01:23:54.171026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 01:23:54.171033 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:23:54.171040 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:23:54.171048 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 01:23:54.171055 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 01:23:54.171063 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:23:54.171071 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:23:54.171078 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:23:54.171085 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:23:54.171093 kernel: ACPI: Interpreter enabled
Mar 7 01:23:54.171100 kernel: ACPI: Using GIC for interrupt routing
Mar 7 01:23:54.171107 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 7 01:23:54.171114 kernel: printk: console [ttyAMA0] enabled
Mar 7 01:23:54.171122 kernel: printk: bootconsole [pl11] disabled
Mar 7 01:23:54.171130 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 7 01:23:54.171137 kernel: iommu: Default domain type: Translated
Mar 7 01:23:54.171145 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 01:23:54.171152 kernel: efivars: Registered efivars operations
Mar 7 01:23:54.171159 kernel: vgaarb: loaded
Mar 7 01:23:54.171167 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 01:23:54.171174 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:23:54.171181 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:23:54.171188 kernel: pnp: PnP ACPI init
Mar 7 01:23:54.171197 kernel: pnp: PnP ACPI: found 0 devices
Mar 7 01:23:54.171208 kernel: NET: Registered PF_INET protocol family
Mar 7 01:23:54.171215 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:23:54.171223 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:23:54.171230 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:23:54.171237 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:23:54.171245 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:23:54.171252 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:23:54.171259 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:23:54.171268 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:23:54.171275 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:23:54.171283 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:23:54.171290 kernel: kvm [1]: HYP mode not available
Mar 7 01:23:54.171297 kernel: Initialise system trusted keyrings
Mar 7 01:23:54.171304 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:23:54.171311 kernel: Key type asymmetric registered
Mar 7 01:23:54.171319 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:23:54.171326 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 01:23:54.171334 kernel: io scheduler mq-deadline registered
Mar 7 01:23:54.171342 kernel: io scheduler kyber registered
Mar 7 01:23:54.171349 kernel: io scheduler bfq registered
Mar 7 01:23:54.171356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:23:54.171363 kernel: thunder_xcv, ver 1.0
Mar 7 01:23:54.171370 kernel: thunder_bgx, ver 1.0
Mar 7 01:23:54.171377 kernel: nicpf, ver 1.0
Mar 7 01:23:54.171384 kernel: nicvf, ver 1.0
Mar 7 01:23:54.171499 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 7 01:23:54.171572 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T01:23:53 UTC (1772846633)
Mar 7 01:23:54.171582 kernel: efifb: probing for efifb
Mar 7 01:23:54.171590 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 01:23:54.171597 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 01:23:54.171605 kernel: efifb: scrolling: redraw
Mar 7 01:23:54.171612 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:23:54.171619 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:23:54.171626 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:23:54.171636 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 7 01:23:54.171643 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 01:23:54.171650 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 7 01:23:54.171658 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 7 01:23:54.171665 kernel: watchdog: Hard watchdog permanently disabled
Mar 7 01:23:54.171672 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:23:54.171680 kernel: Segment Routing with IPv6
Mar 7 01:23:54.171687 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:23:54.171694 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:23:54.171702 kernel: Key type dns_resolver registered
Mar 7 01:23:54.171710 kernel: registered taskstats version 1
Mar 7 01:23:54.171717 kernel: Loading compiled-in X.509 certificates
Mar 7 01:23:54.171724 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9'
Mar 7 01:23:54.171732 kernel: Key type .fscrypt registered
Mar 7 01:23:54.171739 kernel: Key type fscrypt-provisioning registered
Mar 7 01:23:54.171746 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:23:54.171753 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:23:54.171760 kernel: ima: No architecture policies found
Mar 7 01:23:54.171769 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 7 01:23:54.171776 kernel: clk: Disabling unused clocks
Mar 7 01:23:54.171784 kernel: Freeing unused kernel memory: 39424K
Mar 7 01:23:54.171791 kernel: Run /init as init process
Mar 7 01:23:54.171798 kernel: with arguments:
Mar 7 01:23:54.171805 kernel: /init
Mar 7 01:23:54.171812 kernel: with environment:
Mar 7 01:23:54.171819 kernel: HOME=/
Mar 7 01:23:54.171826 kernel: TERM=linux
Mar 7 01:23:54.171836 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:23:54.171846 systemd[1]: Detected virtualization microsoft.
Mar 7 01:23:54.171854 systemd[1]: Detected architecture arm64.
Mar 7 01:23:54.171862 systemd[1]: Running in initrd.
Mar 7 01:23:54.171869 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:23:54.171877 systemd[1]: Hostname set to .
Mar 7 01:23:54.171885 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:23:54.171894 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:23:54.171902 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:23:54.171910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:23:54.171918 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:23:54.171926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:23:54.171934 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:23:54.171942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:23:54.171951 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:23:54.171961 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:23:54.171969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:23:54.171977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:23:54.171984 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:23:54.171992 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:23:54.172000 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:23:54.172008 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:23:54.172016 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:23:54.172025 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:23:54.172033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:23:54.172041 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:23:54.172049 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:23:54.172057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:23:54.172064 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:23:54.172072 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:23:54.172080 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:23:54.172089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:23:54.172097 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:23:54.172105 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:23:54.172113 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:23:54.172133 systemd-journald[217]: Collecting audit messages is disabled.
Mar 7 01:23:54.172153 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:23:54.172162 systemd-journald[217]: Journal started
Mar 7 01:23:54.172180 systemd-journald[217]: Runtime Journal (/run/log/journal/de3d367f58564545a4cf570ad517df36) is 8.0M, max 78.5M, 70.5M free.
Mar 7 01:23:54.186488 systemd-modules-load[218]: Inserted module 'overlay'
Mar 7 01:23:54.191534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:54.205378 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:23:54.206845 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:23:54.225641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:23:54.235272 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:23:54.235296 kernel: Bridge firewalling registered
Mar 7 01:23:54.234123 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 7 01:23:54.239602 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:23:54.250489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:23:54.255527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:54.272407 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:23:54.279334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:23:54.299353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:23:54.326385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:23:54.333226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:23:54.345218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:23:54.360220 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:23:54.368875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:23:54.396403 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:23:54.406675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:23:54.431640 dracut-cmdline[252]: dracut-dracut-053
Mar 7 01:23:54.431640 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 01:23:54.412323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:23:54.435702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:23:54.492105 systemd-resolved[255]: Positive Trust Anchors:
Mar 7 01:23:54.495477 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:23:54.495511 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:23:54.497801 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 7 01:23:54.498595 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:23:54.503800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:23:54.561214 kernel: SCSI subsystem initialized
Mar 7 01:23:54.568220 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:23:54.580216 kernel: iscsi: registered transport (tcp)
Mar 7 01:23:54.592209 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:23:54.595218 kernel: QLogic iSCSI HBA Driver
Mar 7 01:23:54.631970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:23:54.649367 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:23:54.678486 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:23:54.678558 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:23:54.683613 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:23:54.734223 kernel: raid6: neonx8 gen() 15824 MB/s
Mar 7 01:23:54.750207 kernel: raid6: neonx4 gen() 15704 MB/s
Mar 7 01:23:54.769209 kernel: raid6: neonx2 gen() 13243 MB/s
Mar 7 01:23:54.789215 kernel: raid6: neonx1 gen() 10483 MB/s
Mar 7 01:23:54.808211 kernel: raid6: int64x8 gen() 6981 MB/s
Mar 7 01:23:54.828213 kernel: raid6: int64x4 gen() 7353 MB/s
Mar 7 01:23:54.848216 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 7 01:23:54.870016 kernel: raid6: int64x1 gen() 5071 MB/s
Mar 7 01:23:54.870071 kernel: raid6: using algorithm neonx8 gen() 15824 MB/s
Mar 7 01:23:54.893334 kernel: raid6: .... xor() 12048 MB/s, rmw enabled
Mar 7 01:23:54.893392 kernel: raid6: using neon recovery algorithm
Mar 7 01:23:54.903242 kernel: xor: measuring software checksum speed
Mar 7 01:23:54.903274 kernel: 8regs : 19764 MB/sec
Mar 7 01:23:54.906508 kernel: 32regs : 19660 MB/sec
Mar 7 01:23:54.910443 kernel: arm64_neon : 27034 MB/sec
Mar 7 01:23:54.913890 kernel: xor: using function: arm64_neon (27034 MB/sec)
Mar 7 01:23:54.964228 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:23:54.974453 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:23:54.991397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:23:55.011779 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 7 01:23:55.016612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:23:55.032432 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:23:55.053709 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Mar 7 01:23:55.081380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:23:55.093412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:23:55.129698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:23:55.148392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:23:55.169248 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:23:55.180802 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:23:55.192710 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:23:55.203019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:23:55.216393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:23:55.231672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:23:55.241351 kernel: hv_vmbus: Vmbus version:5.3
Mar 7 01:23:55.235904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:23:55.246990 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:23:55.264996 kernel: hv_vmbus: registering driver hid_hyperv
Mar 7 01:23:55.265016 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 7 01:23:55.265026 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 7 01:23:55.257617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:23:55.257834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:55.314239 kernel: hv_vmbus: registering driver hv_netvsc
Mar 7 01:23:55.314269 kernel: hv_vmbus: registering driver hv_storvsc
Mar 7 01:23:55.314279 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 7 01:23:55.314289 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 7 01:23:55.314299 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 7 01:23:55.314308 kernel: scsi host1: storvsc_host_t
Mar 7 01:23:55.283532 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.341966 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 7 01:23:55.342120 kernel: scsi host0: storvsc_host_t
Mar 7 01:23:55.342239 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 7 01:23:55.342263 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 7 01:23:55.313867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.352244 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:23:55.359703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:55.371579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:23:55.371643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:55.377594 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.400465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.418593 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: VF slot 1 added
Mar 7 01:23:55.418726 kernel: PTP clock support registered
Mar 7 01:23:55.427823 kernel: hv_utils: Registering HyperV Utility Driver
Mar 7 01:23:55.427859 kernel: hv_vmbus: registering driver hv_utils
Mar 7 01:23:55.430216 kernel: hv_utils: Heartbeat IC version 3.0
Mar 7 01:23:55.430245 kernel: hv_utils: Shutdown IC version 3.2
Mar 7 01:23:55.430256 kernel: hv_utils: TimeSync IC version 4.0
Mar 7 01:23:55.619609 systemd-resolved[255]: Clock change detected. Flushing caches.
Mar 7 01:23:55.648505 kernel: hv_vmbus: registering driver hv_pci Mar 7 01:23:55.648533 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 7 01:23:55.648701 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:23:55.652852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:23:55.666721 kernel: hv_pci f6f50c91-c1cd-4e95-b23d-8f40a76741ff: PCI VMBus probing: Using version 0x10004 Mar 7 01:23:55.666855 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 7 01:23:55.667314 kernel: hv_pci f6f50c91-c1cd-4e95-b23d-8f40a76741ff: PCI host bridge to bus c1cd:00 Mar 7 01:23:55.677386 kernel: pci_bus c1cd:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 7 01:23:55.682499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:23:55.698568 kernel: pci_bus c1cd:00: No busn resource found for root bus, will use [bus 00-ff] Mar 7 01:23:55.712635 kernel: pci c1cd:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 7 01:23:55.712713 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 7 01:23:55.712880 kernel: pci c1cd:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 7 01:23:55.712897 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 7 01:23:55.718569 kernel: pci c1cd:00:02.0: enabling Extended Tags Mar 7 01:23:55.718612 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:23:55.729147 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 7 01:23:55.729323 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 7 01:23:55.740431 kernel: pci c1cd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c1cd:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 7 01:23:55.740476 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:23:55.749238 kernel: pci_bus c1cd:00: busn_res: [bus 00-ff] end is updated to 00
Mar 7 01:23:55.749378 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:23:55.756618 kernel: pci c1cd:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 7 01:23:55.756762 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:23:55.778916 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:23:55.803316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#126 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:23:55.824136 kernel: mlx5_core c1cd:00:02.0: enabling device (0000 -> 0002) Mar 7 01:23:55.829304 kernel: mlx5_core c1cd:00:02.0: firmware version: 16.30.5026 Mar 7 01:23:56.027605 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: VF registering: eth1 Mar 7 01:23:56.027797 kernel: mlx5_core c1cd:00:02.0 eth1: joined to eth0 Mar 7 01:23:56.035316 kernel: mlx5_core c1cd:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 7 01:23:56.045309 kernel: mlx5_core c1cd:00:02.0 enP49613s1: renamed from eth1 Mar 7 01:23:56.278320 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (482) Mar 7 01:23:56.293029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 01:23:56.304858 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 7 01:23:56.377315 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (500) Mar 7 01:23:56.390172 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 7 01:23:56.395641 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 7 01:23:56.422547 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:23:56.452101 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 7 01:23:57.448306 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:23:57.448360 disk-uuid[606]: The operation has completed successfully. Mar 7 01:23:57.512936 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:23:57.517336 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:23:57.542407 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:23:57.552277 sh[695]: Success Mar 7 01:23:57.582323 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 7 01:23:57.841307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:23:57.859413 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:23:57.865826 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:23:57.896340 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31 Mar 7 01:23:57.896379 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:23:57.901896 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:23:57.906436 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:23:57.909854 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:23:58.178202 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:23:58.182493 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:23:58.204504 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:23:58.209416 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 7 01:23:58.245850 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:23:58.245898 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:23:58.249307 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:23:58.288600 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:23:58.301365 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:23:58.305742 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:23:58.314163 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:23:58.319629 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:23:58.335591 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:23:58.346342 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:23:58.381461 systemd-networkd[879]: lo: Link UP Mar 7 01:23:58.381468 systemd-networkd[879]: lo: Gained carrier Mar 7 01:23:58.382952 systemd-networkd[879]: Enumeration completed Mar 7 01:23:58.384250 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:23:58.387867 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:23:58.387870 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:23:58.392451 systemd[1]: Reached target network.target - Network. 
Mar 7 01:23:58.467317 kernel: mlx5_core c1cd:00:02.0 enP49613s1: Link up Mar 7 01:23:58.505308 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: Data path switched to VF: enP49613s1 Mar 7 01:23:58.505945 systemd-networkd[879]: enP49613s1: Link UP Mar 7 01:23:58.506031 systemd-networkd[879]: eth0: Link UP Mar 7 01:23:58.506123 systemd-networkd[879]: eth0: Gained carrier Mar 7 01:23:58.506131 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:23:58.515717 systemd-networkd[879]: enP49613s1: Gained carrier Mar 7 01:23:58.538345 systemd-networkd[879]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 01:23:59.216031 ignition[878]: Ignition 2.19.0 Mar 7 01:23:59.216044 ignition[878]: Stage: fetch-offline Mar 7 01:23:59.220072 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:23:59.216079 ignition[878]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:23:59.238412 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 01:23:59.216086 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:23:59.216170 ignition[878]: parsed url from cmdline: "" Mar 7 01:23:59.216173 ignition[878]: no config URL provided Mar 7 01:23:59.216177 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:23:59.216183 ignition[878]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:23:59.216187 ignition[878]: failed to fetch config: resource requires networking Mar 7 01:23:59.216592 ignition[878]: Ignition finished successfully Mar 7 01:23:59.256833 ignition[887]: Ignition 2.19.0 Mar 7 01:23:59.256840 ignition[887]: Stage: fetch Mar 7 01:23:59.257045 ignition[887]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:23:59.257055 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:23:59.257146 ignition[887]: parsed url from cmdline: "" Mar 7 01:23:59.257149 ignition[887]: no config URL provided Mar 7 01:23:59.257154 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:23:59.257163 ignition[887]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:23:59.257186 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 01:23:59.382994 ignition[887]: GET result: OK Mar 7 01:23:59.383053 ignition[887]: config has been read from IMDS userdata Mar 7 01:23:59.383097 ignition[887]: parsing config with SHA512: a5f4aa03ed659d1407a491584435858a745d1da261a47c47030f81121e55125e9186985db7e1b9407d533b278504f1f9fa04c8403962328ea9e4ef8b386c8bdf Mar 7 01:23:59.386770 unknown[887]: fetched base config from "system" Mar 7 01:23:59.387151 ignition[887]: fetch: fetch complete Mar 7 01:23:59.386777 unknown[887]: fetched base config from "system" Mar 7 01:23:59.387156 ignition[887]: fetch: fetch passed Mar 7 01:23:59.386782 unknown[887]: fetched user config from "azure" Mar 7 01:23:59.387193 ignition[887]: Ignition finished successfully Mar 7 01:23:59.390468 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:23:59.408419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:23:59.426438 ignition[893]: Ignition 2.19.0 Mar 7 01:23:59.430816 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:23:59.426444 ignition[893]: Stage: kargs Mar 7 01:23:59.426618 ignition[893]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:23:59.426627 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:23:59.427529 ignition[893]: kargs: kargs passed Mar 7 01:23:59.427572 ignition[893]: Ignition finished successfully Mar 7 01:23:59.452533 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:23:59.469519 ignition[899]: Ignition 2.19.0 Mar 7 01:23:59.469527 ignition[899]: Stage: disks Mar 7 01:23:59.473027 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:23:59.469703 ignition[899]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:23:59.479965 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:23:59.469712 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:23:59.489163 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:23:59.471702 ignition[899]: disks: disks passed Mar 7 01:23:59.498159 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:23:59.471760 ignition[899]: Ignition finished successfully Mar 7 01:23:59.507326 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:23:59.516278 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:23:59.536571 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:23:59.615980 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 7 01:23:59.624486 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:23:59.639495 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:23:59.689634 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none. Mar 7 01:23:59.689379 systemd-networkd[879]: eth0: Gained IPv6LL Mar 7 01:23:59.690576 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:23:59.697431 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:23:59.738369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:23:59.756310 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (919) Mar 7 01:23:59.767550 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:23:59.767593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:23:59.771460 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:23:59.778317 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:23:59.778435 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:23:59.785471 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 01:23:59.793020 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:23:59.793052 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:23:59.806316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:23:59.818880 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:23:59.841538 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:24:00.291176 coreos-metadata[936]: Mar 07 01:24:00.291 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:24:00.298134 coreos-metadata[936]: Mar 07 01:24:00.298 INFO Fetch successful Mar 7 01:24:00.298134 coreos-metadata[936]: Mar 07 01:24:00.298 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:24:00.312456 coreos-metadata[936]: Mar 07 01:24:00.312 INFO Fetch successful Mar 7 01:24:00.329318 coreos-metadata[936]: Mar 07 01:24:00.329 INFO wrote hostname ci-4081.3.6-n-0072e04abc to /sysroot/etc/hostname Mar 7 01:24:00.336569 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:24:00.635037 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:24:00.691517 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:24:00.715114 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:24:00.722197 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:24:01.998149 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:24:02.010485 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:24:02.022469 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:24:02.038613 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:24:02.033270 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 7 01:24:02.063572 ignition[1038]: INFO : Ignition 2.19.0 Mar 7 01:24:02.063572 ignition[1038]: INFO : Stage: mount Mar 7 01:24:02.071643 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:24:02.071643 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:24:02.071643 ignition[1038]: INFO : mount: mount passed Mar 7 01:24:02.071643 ignition[1038]: INFO : Ignition finished successfully Mar 7 01:24:02.068654 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:24:02.093394 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:24:02.104987 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:24:02.114943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:24:02.138312 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1050) Mar 7 01:24:02.149510 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:24:02.149524 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:24:02.152968 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:24:02.160305 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:24:02.162324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:24:02.189564 ignition[1067]: INFO : Ignition 2.19.0 Mar 7 01:24:02.189564 ignition[1067]: INFO : Stage: files Mar 7 01:24:02.189564 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:24:02.189564 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:24:02.189564 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:24:02.210214 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:24:02.210214 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:24:02.296733 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:24:02.303350 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:24:02.303350 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:24:02.303119 unknown[1067]: wrote ssh authorized keys file for user: core Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 7 01:24:02.370274 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 7 01:24:02.476318 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 01:24:02.476318 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 7 01:24:03.086601 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 7 01:24:03.553160 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 01:24:03.553160 ignition[1067]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 7 01:24:03.602230 ignition[1067]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:24:03.611922 ignition[1067]: INFO : files: files passed Mar 7 01:24:03.611922 ignition[1067]: INFO : Ignition finished successfully Mar 7 01:24:03.619091 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:24:03.654570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:24:03.666459 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:24:03.735684 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:24:03.735684 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:24:03.689420 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:24:03.760629 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:24:03.689529 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:24:03.706859 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:24:03.712591 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:24:03.738442 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:24:03.769498 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:24:03.771332 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:24:03.778828 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:24:03.788421 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:24:03.797934 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:24:03.814546 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:24:03.844707 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:24:03.861585 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:24:03.877787 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:24:03.888041 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:24:03.898367 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:24:03.906495 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:24:03.906664 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:24:03.921798 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:24:03.930642 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:24:03.938432 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:24:03.947940 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:24:03.958117 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:24:03.968108 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:24:03.972726 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:24:03.982590 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:24:03.992159 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Mar 7 01:24:04.001621 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:24:04.009432 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:24:04.009605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:24:04.022221 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:24:04.031545 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:24:04.041272 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:24:04.041385 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:24:04.052242 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:24:04.052424 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:24:04.069186 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:24:04.069369 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:24:04.078875 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:24:04.079020 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:24:04.087592 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 01:24:04.087737 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:24:04.114414 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Mar 7 01:24:04.140606 ignition[1119]: INFO : Ignition 2.19.0 Mar 7 01:24:04.140606 ignition[1119]: INFO : Stage: umount Mar 7 01:24:04.140606 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:24:04.140606 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:24:04.140606 ignition[1119]: INFO : umount: umount passed Mar 7 01:24:04.140606 ignition[1119]: INFO : Ignition finished successfully Mar 7 01:24:04.122429 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:24:04.122632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:24:04.145481 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:24:04.156715 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:24:04.156857 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:24:04.166479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:24:04.166581 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:24:04.187787 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:24:04.188386 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:24:04.188472 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:24:04.193244 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:24:04.193347 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:24:04.203383 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:24:04.203421 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:24:04.212068 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:24:04.212101 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:24:04.221473 systemd[1]: Stopped target network.target - Network. 
Mar 7 01:24:04.229056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:24:04.229098 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:24:04.238399 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:24:04.247323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:24:04.256325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:24:04.261974 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:24:04.269603 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:24:04.277448 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:24:04.277498 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:24:04.285717 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:24:04.285768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:24:04.293827 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:24:04.293865 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:24:04.302073 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:24:04.302103 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:24:04.310840 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:24:04.319160 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:24:04.328108 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:24:04.328192 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:24:04.337353 systemd-networkd[879]: eth0: DHCPv6 lease lost
Mar 7 01:24:04.342357 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:24:04.342472 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:24:04.352766 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:24:04.352855 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:24:04.532316 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: Data path switched from VF: enP49613s1
Mar 7 01:24:04.363619 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:24:04.363672 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:24:04.392475 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:24:04.400705 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:24:04.400774 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:24:04.410081 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:24:04.410122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:24:04.418673 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:24:04.418707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:24:04.427235 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:24:04.427271 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:24:04.436828 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:24:04.479958 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:24:04.480166 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:24:04.490207 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:24:04.490252 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:24:04.497825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:24:04.497850 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:24:04.506680 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:24:04.506720 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:24:04.527242 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:24:04.527294 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:24:04.540872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:24:04.540923 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:24:04.570512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:24:04.581368 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:24:04.581437 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:24:04.591984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:24:04.592028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:24:04.602127 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:24:04.602330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:24:04.633153 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:24:04.633280 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:24:06.649931 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:24:06.650036 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:24:06.654585 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:24:06.662805 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:24:06.662858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:24:06.685477 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:24:06.757927 systemd[1]: Switching root. Mar 7 01:24:06.786469 systemd-journald[217]: Journal stopped Mar 7 01:23:54.169894 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 7 01:23:54.169915 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026 Mar 7 01:23:54.169923 kernel: KASLR enabled Mar 7 01:23:54.169929 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 7 01:23:54.169936 kernel: printk: bootconsole [pl11] enabled Mar 7 01:23:54.169942 kernel: efi: EFI v2.7 by EDK II Mar 7 01:23:54.169949 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Mar 7 01:23:54.169955 kernel: random: crng init done Mar 7 01:23:54.169961 kernel: ACPI: Early table checksum verification disabled Mar 7 01:23:54.169967 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 7 01:23:54.169973 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.169979 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.169986 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 7 01:23:54.169992 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.170000 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.170006 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.170013 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.170020 kernel: ACPI: APIC 0x000000003FD5F818 0000FC 
(v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.170027 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.170033 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 7 01:23:54.170040 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:23:54.170046 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 7 01:23:54.170052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Mar 7 01:23:54.170059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Mar 7 01:23:54.170065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Mar 7 01:23:54.170071 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Mar 7 01:23:54.170078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Mar 7 01:23:54.170084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Mar 7 01:23:54.170092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Mar 7 01:23:54.170098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Mar 7 01:23:54.170105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Mar 7 01:23:54.170111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Mar 7 01:23:54.170117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Mar 7 01:23:54.170124 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Mar 7 01:23:54.170130 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff] Mar 7 01:23:54.170136 kernel: Zone ranges: Mar 7 01:23:54.170143 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 7 01:23:54.170149 kernel: DMA32 empty Mar 7 01:23:54.170155 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 7 01:23:54.170161 kernel: Movable zone start for each node Mar 7 01:23:54.170172 kernel: Early memory node ranges Mar 7 01:23:54.170178 kernel: node 0: 
[mem 0x0000000000000000-0x00000000007fffff] Mar 7 01:23:54.170185 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 7 01:23:54.170192 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 7 01:23:54.170203 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 7 01:23:54.170212 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 7 01:23:54.170219 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 7 01:23:54.170226 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 7 01:23:54.170232 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 7 01:23:54.170239 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 7 01:23:54.170246 kernel: psci: probing for conduit method from ACPI. Mar 7 01:23:54.170253 kernel: psci: PSCIv1.1 detected in firmware. Mar 7 01:23:54.170259 kernel: psci: Using standard PSCI v0.2 function IDs Mar 7 01:23:54.170266 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 7 01:23:54.170273 kernel: psci: SMC Calling Convention v1.4 Mar 7 01:23:54.170280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 7 01:23:54.170286 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 7 01:23:54.170294 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 7 01:23:54.170301 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 7 01:23:54.170308 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 7 01:23:54.170314 kernel: Detected PIPT I-cache on CPU0 Mar 7 01:23:54.170321 kernel: CPU features: detected: GIC system register CPU interface Mar 7 01:23:54.170328 kernel: CPU features: detected: Hardware dirty bit management Mar 7 01:23:54.170335 kernel: CPU features: detected: Spectre-BHB Mar 7 01:23:54.170342 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 7 01:23:54.170348 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 7 01:23:54.170355 kernel: CPU features: detected: ARM erratum 1418040 Mar 7 
01:23:54.170362 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 7 01:23:54.170370 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 7 01:23:54.170377 kernel: alternatives: applying boot alternatives Mar 7 01:23:54.170385 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 01:23:54.170392 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:23:54.170399 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:23:54.170405 kernel: Fallback order for Node 0: 0 Mar 7 01:23:54.170412 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 7 01:23:54.170419 kernel: Policy zone: Normal Mar 7 01:23:54.170426 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:23:54.170432 kernel: software IO TLB: area num 2. Mar 7 01:23:54.170439 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 7 01:23:54.170448 kernel: Memory: 3982640K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211520K reserved, 0K cma-reserved) Mar 7 01:23:54.170455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:23:54.170461 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:23:54.170468 kernel: rcu: RCU event tracing is enabled. Mar 7 01:23:54.170475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:23:54.170482 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:23:54.170489 kernel: Tracing variant of Tasks RCU enabled. 
Mar 7 01:23:54.170496 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:23:54.170503 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:23:54.170509 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 7 01:23:54.170516 kernel: GICv3: 960 SPIs implemented Mar 7 01:23:54.170524 kernel: GICv3: 0 Extended SPIs implemented Mar 7 01:23:54.170531 kernel: Root IRQ handler: gic_handle_irq Mar 7 01:23:54.170537 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 7 01:23:54.170544 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 7 01:23:54.170551 kernel: ITS: No ITS available, not enabling LPIs Mar 7 01:23:54.170558 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:23:54.170565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 01:23:54.170572 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 7 01:23:54.170579 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 7 01:23:54.170585 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 7 01:23:54.170592 kernel: Console: colour dummy device 80x25 Mar 7 01:23:54.170601 kernel: printk: console [tty1] enabled Mar 7 01:23:54.170608 kernel: ACPI: Core revision 20230628 Mar 7 01:23:54.170615 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 7 01:23:54.170622 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:23:54.170629 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:23:54.170636 kernel: landlock: Up and running. Mar 7 01:23:54.170643 kernel: SELinux: Initializing. 
Mar 7 01:23:54.170650 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:23:54.170657 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:23:54.170665 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:23:54.170672 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:23:54.170679 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 7 01:23:54.170686 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 7 01:23:54.170693 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 7 01:23:54.170700 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:23:54.170707 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:23:54.170714 kernel: Remapping and enabling EFI services. Mar 7 01:23:54.170727 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:23:54.170734 kernel: Detected PIPT I-cache on CPU1 Mar 7 01:23:54.170741 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 7 01:23:54.170748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 01:23:54.170757 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 7 01:23:54.170764 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:23:54.170771 kernel: SMP: Total of 2 processors activated. 
Mar 7 01:23:54.170779 kernel: CPU features: detected: 32-bit EL0 Support Mar 7 01:23:54.170786 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 7 01:23:54.170795 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 7 01:23:54.170803 kernel: CPU features: detected: CRC32 instructions Mar 7 01:23:54.170810 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 7 01:23:54.170817 kernel: CPU features: detected: LSE atomic instructions Mar 7 01:23:54.170824 kernel: CPU features: detected: Privileged Access Never Mar 7 01:23:54.170831 kernel: CPU: All CPU(s) started at EL1 Mar 7 01:23:54.170839 kernel: alternatives: applying system-wide alternatives Mar 7 01:23:54.170846 kernel: devtmpfs: initialized Mar 7 01:23:54.170853 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:23:54.170862 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:23:54.170870 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:23:54.170877 kernel: SMBIOS 3.1.0 present. 
Mar 7 01:23:54.170884 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 7 01:23:54.170892 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:23:54.170899 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 7 01:23:54.170907 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 7 01:23:54.170914 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 7 01:23:54.170921 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:23:54.170930 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 7 01:23:54.170937 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:23:54.170945 kernel: cpuidle: using governor menu Mar 7 01:23:54.170952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 7 01:23:54.170959 kernel: ASID allocator initialised with 32768 entries Mar 7 01:23:54.170966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:23:54.170974 kernel: Serial: AMBA PL011 UART driver Mar 7 01:23:54.170981 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 7 01:23:54.170988 kernel: Modules: 0 pages in range for non-PLT usage Mar 7 01:23:54.170997 kernel: Modules: 509008 pages in range for PLT usage Mar 7 01:23:54.171004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:23:54.171011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:23:54.171019 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 7 01:23:54.171026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 7 01:23:54.171033 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:23:54.171040 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:23:54.171048 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Mar 7 01:23:54.171055 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 7 01:23:54.171063 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:23:54.171071 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:23:54.171078 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:23:54.171085 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:23:54.171093 kernel: ACPI: Interpreter enabled Mar 7 01:23:54.171100 kernel: ACPI: Using GIC for interrupt routing Mar 7 01:23:54.171107 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 7 01:23:54.171114 kernel: printk: console [ttyAMA0] enabled Mar 7 01:23:54.171122 kernel: printk: bootconsole [pl11] disabled Mar 7 01:23:54.171130 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 7 01:23:54.171137 kernel: iommu: Default domain type: Translated Mar 7 01:23:54.171145 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 7 01:23:54.171152 kernel: efivars: Registered efivars operations Mar 7 01:23:54.171159 kernel: vgaarb: loaded Mar 7 01:23:54.171167 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 7 01:23:54.171174 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 01:23:54.171181 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:23:54.171188 kernel: pnp: PnP ACPI init Mar 7 01:23:54.171197 kernel: pnp: PnP ACPI: found 0 devices Mar 7 01:23:54.171208 kernel: NET: Registered PF_INET protocol family Mar 7 01:23:54.171215 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 01:23:54.171223 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 01:23:54.171230 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:23:54.171237 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 01:23:54.171245 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 01:23:54.171252 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 01:23:54.171259 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:23:54.171268 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:23:54.171275 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:23:54.171283 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:23:54.171290 kernel: kvm [1]: HYP mode not available Mar 7 01:23:54.171297 kernel: Initialise system trusted keyrings Mar 7 01:23:54.171304 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 01:23:54.171311 kernel: Key type asymmetric registered Mar 7 01:23:54.171319 kernel: Asymmetric key parser 'x509' registered Mar 7 01:23:54.171326 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 7 01:23:54.171334 kernel: io scheduler mq-deadline registered Mar 7 01:23:54.171342 kernel: io scheduler kyber registered Mar 7 01:23:54.171349 kernel: io scheduler bfq registered Mar 7 01:23:54.171356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:23:54.171363 kernel: thunder_xcv, ver 1.0 Mar 7 01:23:54.171370 kernel: thunder_bgx, ver 1.0 Mar 7 01:23:54.171377 kernel: nicpf, ver 1.0 Mar 7 01:23:54.171384 kernel: nicvf, ver 1.0 Mar 7 01:23:54.171499 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 7 01:23:54.171572 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T01:23:53 UTC (1772846633) Mar 7 01:23:54.171582 kernel: efifb: probing for efifb Mar 7 01:23:54.171590 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 7 01:23:54.171597 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 7 01:23:54.171605 kernel: efifb: scrolling: redraw Mar 7 01:23:54.171612 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 7 01:23:54.171619 kernel: Console: switching to colour 
frame buffer device 128x48 Mar 7 01:23:54.171626 kernel: fb0: EFI VGA frame buffer device Mar 7 01:23:54.171636 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 7 01:23:54.171643 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:23:54.171650 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 7 01:23:54.171658 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 7 01:23:54.171665 kernel: watchdog: Hard watchdog permanently disabled Mar 7 01:23:54.171672 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:23:54.171680 kernel: Segment Routing with IPv6 Mar 7 01:23:54.171687 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:23:54.171694 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:23:54.171702 kernel: Key type dns_resolver registered Mar 7 01:23:54.171710 kernel: registered taskstats version 1 Mar 7 01:23:54.171717 kernel: Loading compiled-in X.509 certificates Mar 7 01:23:54.171724 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9' Mar 7 01:23:54.171732 kernel: Key type .fscrypt registered Mar 7 01:23:54.171739 kernel: Key type fscrypt-provisioning registered Mar 7 01:23:54.171746 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 01:23:54.171753 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:23:54.171760 kernel: ima: No architecture policies found Mar 7 01:23:54.171769 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 7 01:23:54.171776 kernel: clk: Disabling unused clocks Mar 7 01:23:54.171784 kernel: Freeing unused kernel memory: 39424K Mar 7 01:23:54.171791 kernel: Run /init as init process Mar 7 01:23:54.171798 kernel: with arguments: Mar 7 01:23:54.171805 kernel: /init Mar 7 01:23:54.171812 kernel: with environment: Mar 7 01:23:54.171819 kernel: HOME=/ Mar 7 01:23:54.171826 kernel: TERM=linux Mar 7 01:23:54.171836 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:23:54.171846 systemd[1]: Detected virtualization microsoft. Mar 7 01:23:54.171854 systemd[1]: Detected architecture arm64. Mar 7 01:23:54.171862 systemd[1]: Running in initrd. Mar 7 01:23:54.171869 systemd[1]: No hostname configured, using default hostname. Mar 7 01:23:54.171877 systemd[1]: Hostname set to . Mar 7 01:23:54.171885 systemd[1]: Initializing machine ID from random generator. Mar 7 01:23:54.171894 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:23:54.171902 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:23:54.171910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:23:54.171918 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:23:54.171926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 7 01:23:54.171934 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:23:54.171942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:23:54.171951 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:23:54.171961 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:23:54.171969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:23:54.171977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:23:54.171984 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:23:54.171992 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:23:54.172000 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:23:54.172008 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:23:54.172016 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:23:54.172025 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:23:54.172033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:23:54.172041 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:23:54.172049 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:23:54.172057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:23:54.172064 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:23:54.172072 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:23:54.172080 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:23:54.172089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 7 01:23:54.172097 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:23:54.172105 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:23:54.172113 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:23:54.172133 systemd-journald[217]: Collecting audit messages is disabled. Mar 7 01:23:54.172153 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:23:54.172162 systemd-journald[217]: Journal started Mar 7 01:23:54.172180 systemd-journald[217]: Runtime Journal (/run/log/journal/de3d367f58564545a4cf570ad517df36) is 8.0M, max 78.5M, 70.5M free. Mar 7 01:23:54.186488 systemd-modules-load[218]: Inserted module 'overlay' Mar 7 01:23:54.191534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:23:54.205378 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:23:54.206845 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:23:54.225641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:23:54.235272 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:23:54.235296 kernel: Bridge firewalling registered Mar 7 01:23:54.234123 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 7 01:23:54.239602 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:23:54.250489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:23:54.255527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:23:54.272407 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:23:54.279334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 7 01:23:54.299353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:23:54.326385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:23:54.333226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:23:54.345218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:23:54.360220 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:23:54.368875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:23:54.396403 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:23:54.406675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:23:54.431640 dracut-cmdline[252]: dracut-dracut-053
Mar 7 01:23:54.431640 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 01:23:54.412323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:23:54.435702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:23:54.492105 systemd-resolved[255]: Positive Trust Anchors:
Mar 7 01:23:54.495477 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:23:54.495511 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:23:54.497801 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 7 01:23:54.498595 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:23:54.503800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:23:54.561214 kernel: SCSI subsystem initialized
Mar 7 01:23:54.568220 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:23:54.580216 kernel: iscsi: registered transport (tcp)
Mar 7 01:23:54.592209 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:23:54.595218 kernel: QLogic iSCSI HBA Driver
Mar 7 01:23:54.631970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:23:54.649367 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:23:54.678486 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:23:54.678558 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:23:54.683613 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:23:54.734223 kernel: raid6: neonx8 gen() 15824 MB/s
Mar 7 01:23:54.750207 kernel: raid6: neonx4 gen() 15704 MB/s
Mar 7 01:23:54.769209 kernel: raid6: neonx2 gen() 13243 MB/s
Mar 7 01:23:54.789215 kernel: raid6: neonx1 gen() 10483 MB/s
Mar 7 01:23:54.808211 kernel: raid6: int64x8 gen() 6981 MB/s
Mar 7 01:23:54.828213 kernel: raid6: int64x4 gen() 7353 MB/s
Mar 7 01:23:54.848216 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 7 01:23:54.870016 kernel: raid6: int64x1 gen() 5071 MB/s
Mar 7 01:23:54.870071 kernel: raid6: using algorithm neonx8 gen() 15824 MB/s
Mar 7 01:23:54.893334 kernel: raid6: .... xor() 12048 MB/s, rmw enabled
Mar 7 01:23:54.893392 kernel: raid6: using neon recovery algorithm
Mar 7 01:23:54.903242 kernel: xor: measuring software checksum speed
Mar 7 01:23:54.903274 kernel: 8regs : 19764 MB/sec
Mar 7 01:23:54.906508 kernel: 32regs : 19660 MB/sec
Mar 7 01:23:54.910443 kernel: arm64_neon : 27034 MB/sec
Mar 7 01:23:54.913890 kernel: xor: using function: arm64_neon (27034 MB/sec)
Mar 7 01:23:54.964228 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:23:54.974453 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:23:54.991397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:23:55.011779 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 7 01:23:55.016612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:23:55.032432 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:23:55.053709 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Mar 7 01:23:55.081380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:23:55.093412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:23:55.129698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:23:55.148392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:23:55.169248 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:23:55.180802 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:23:55.192710 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:23:55.203019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:23:55.216393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:23:55.231672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:23:55.241351 kernel: hv_vmbus: Vmbus version:5.3
Mar 7 01:23:55.235904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:23:55.246990 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:23:55.264996 kernel: hv_vmbus: registering driver hid_hyperv
Mar 7 01:23:55.265016 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 7 01:23:55.265026 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 7 01:23:55.257617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:23:55.257834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:55.314239 kernel: hv_vmbus: registering driver hv_netvsc
Mar 7 01:23:55.314269 kernel: hv_vmbus: registering driver hv_storvsc
Mar 7 01:23:55.314279 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 7 01:23:55.314289 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 7 01:23:55.314299 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 7 01:23:55.314308 kernel: scsi host1: storvsc_host_t
Mar 7 01:23:55.283532 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.341966 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 7 01:23:55.342120 kernel: scsi host0: storvsc_host_t
Mar 7 01:23:55.342239 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 7 01:23:55.342263 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 7 01:23:55.313867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.352244 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:23:55.359703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:55.371579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:23:55.371643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:55.377594 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.400465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:23:55.418593 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: VF slot 1 added
Mar 7 01:23:55.418726 kernel: PTP clock support registered
Mar 7 01:23:55.427823 kernel: hv_utils: Registering HyperV Utility Driver
Mar 7 01:23:55.427859 kernel: hv_vmbus: registering driver hv_utils
Mar 7 01:23:55.430216 kernel: hv_utils: Heartbeat IC version 3.0
Mar 7 01:23:55.430245 kernel: hv_utils: Shutdown IC version 3.2
Mar 7 01:23:55.430256 kernel: hv_utils: TimeSync IC version 4.0
Mar 7 01:23:55.619609 systemd-resolved[255]: Clock change detected. Flushing caches.
Mar 7 01:23:55.648505 kernel: hv_vmbus: registering driver hv_pci
Mar 7 01:23:55.648533 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 7 01:23:55.648701 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:23:55.652852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:23:55.666721 kernel: hv_pci f6f50c91-c1cd-4e95-b23d-8f40a76741ff: PCI VMBus probing: Using version 0x10004
Mar 7 01:23:55.666855 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 7 01:23:55.667314 kernel: hv_pci f6f50c91-c1cd-4e95-b23d-8f40a76741ff: PCI host bridge to bus c1cd:00
Mar 7 01:23:55.677386 kernel: pci_bus c1cd:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 7 01:23:55.682499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:23:55.698568 kernel: pci_bus c1cd:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 7 01:23:55.712635 kernel: pci c1cd:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 7 01:23:55.712713 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 7 01:23:55.712880 kernel: pci c1cd:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 7 01:23:55.712897 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 7 01:23:55.718569 kernel: pci c1cd:00:02.0: enabling Extended Tags
Mar 7 01:23:55.718612 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:23:55.729147 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 7 01:23:55.729323 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 7 01:23:55.740431 kernel: pci c1cd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c1cd:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 7 01:23:55.740476 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:23:55.749238 kernel: pci_bus c1cd:00: busn_res: [bus 00-ff] end is updated to 00
Mar 7 01:23:55.749378 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:23:55.756618 kernel: pci c1cd:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 7 01:23:55.756762 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:23:55.778916 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:23:55.803316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#126 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:23:55.824136 kernel: mlx5_core c1cd:00:02.0: enabling device (0000 -> 0002)
Mar 7 01:23:55.829304 kernel: mlx5_core c1cd:00:02.0: firmware version: 16.30.5026
Mar 7 01:23:56.027605 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: VF registering: eth1
Mar 7 01:23:56.027797 kernel: mlx5_core c1cd:00:02.0 eth1: joined to eth0
Mar 7 01:23:56.035316 kernel: mlx5_core c1cd:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 7 01:23:56.045309 kernel: mlx5_core c1cd:00:02.0 enP49613s1: renamed from eth1
Mar 7 01:23:56.278320 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (482)
Mar 7 01:23:56.293029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:23:56.304858 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 7 01:23:56.377315 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (500)
Mar 7 01:23:56.390172 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 7 01:23:56.395641 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 7 01:23:56.422547 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:23:56.452101 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 7 01:23:57.448306 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:23:57.448360 disk-uuid[606]: The operation has completed successfully.
Mar 7 01:23:57.512936 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:23:57.517336 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:23:57.542407 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:23:57.552277 sh[695]: Success
Mar 7 01:23:57.582323 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 7 01:23:57.841307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:23:57.859413 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:23:57.865826 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:23:57.896340 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31
Mar 7 01:23:57.896379 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 7 01:23:57.901896 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:23:57.906436 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:23:57.909854 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:23:58.178202 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:23:58.182493 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:23:58.204504 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:23:58.209416 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:23:58.245850 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 01:23:58.245898 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 01:23:58.249307 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:23:58.288600 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:23:58.301365 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:23:58.305742 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 01:23:58.314163 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:23:58.319629 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:23:58.335591 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:23:58.346342 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:23:58.381461 systemd-networkd[879]: lo: Link UP
Mar 7 01:23:58.381468 systemd-networkd[879]: lo: Gained carrier
Mar 7 01:23:58.382952 systemd-networkd[879]: Enumeration completed
Mar 7 01:23:58.384250 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:23:58.387867 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:23:58.387870 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:23:58.392451 systemd[1]: Reached target network.target - Network.
Mar 7 01:23:58.467317 kernel: mlx5_core c1cd:00:02.0 enP49613s1: Link up
Mar 7 01:23:58.505308 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: Data path switched to VF: enP49613s1
Mar 7 01:23:58.505945 systemd-networkd[879]: enP49613s1: Link UP
Mar 7 01:23:58.506031 systemd-networkd[879]: eth0: Link UP
Mar 7 01:23:58.506123 systemd-networkd[879]: eth0: Gained carrier
Mar 7 01:23:58.506131 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:23:58.515717 systemd-networkd[879]: enP49613s1: Gained carrier
Mar 7 01:23:58.538345 systemd-networkd[879]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 7 01:23:59.216031 ignition[878]: Ignition 2.19.0
Mar 7 01:23:59.216044 ignition[878]: Stage: fetch-offline
Mar 7 01:23:59.220072 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:23:59.216079 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:23:59.238412 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:23:59.216086 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:23:59.216170 ignition[878]: parsed url from cmdline: ""
Mar 7 01:23:59.216173 ignition[878]: no config URL provided
Mar 7 01:23:59.216177 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:23:59.216183 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:23:59.216187 ignition[878]: failed to fetch config: resource requires networking
Mar 7 01:23:59.216592 ignition[878]: Ignition finished successfully
Mar 7 01:23:59.256833 ignition[887]: Ignition 2.19.0
Mar 7 01:23:59.256840 ignition[887]: Stage: fetch
Mar 7 01:23:59.257045 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:23:59.257055 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:23:59.257146 ignition[887]: parsed url from cmdline: ""
Mar 7 01:23:59.257149 ignition[887]: no config URL provided
Mar 7 01:23:59.257154 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:23:59.257163 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:23:59.257186 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 7 01:23:59.382994 ignition[887]: GET result: OK
Mar 7 01:23:59.383053 ignition[887]: config has been read from IMDS userdata
Mar 7 01:23:59.383097 ignition[887]: parsing config with SHA512: a5f4aa03ed659d1407a491584435858a745d1da261a47c47030f81121e55125e9186985db7e1b9407d533b278504f1f9fa04c8403962328ea9e4ef8b386c8bdf
Mar 7 01:23:59.386770 unknown[887]: fetched base config from "system"
Mar 7 01:23:59.387151 ignition[887]: fetch: fetch complete
Mar 7 01:23:59.386777 unknown[887]: fetched base config from "system"
Mar 7 01:23:59.387156 ignition[887]: fetch: fetch passed
Mar 7 01:23:59.386782 unknown[887]: fetched user config from "azure"
Mar 7 01:23:59.387193 ignition[887]: Ignition finished successfully
Mar 7 01:23:59.390468 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:23:59.408419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:23:59.426438 ignition[893]: Ignition 2.19.0
Mar 7 01:23:59.430816 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:23:59.426444 ignition[893]: Stage: kargs
Mar 7 01:23:59.426618 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:23:59.426627 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:23:59.427529 ignition[893]: kargs: kargs passed
Mar 7 01:23:59.427572 ignition[893]: Ignition finished successfully
Mar 7 01:23:59.452533 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:23:59.469519 ignition[899]: Ignition 2.19.0
Mar 7 01:23:59.469527 ignition[899]: Stage: disks
Mar 7 01:23:59.473027 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:23:59.469703 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:23:59.479965 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:23:59.469712 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:23:59.489163 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:23:59.471702 ignition[899]: disks: disks passed
Mar 7 01:23:59.498159 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:23:59.471760 ignition[899]: Ignition finished successfully
Mar 7 01:23:59.507326 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:23:59.516278 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:23:59.536571 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:23:59.615980 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 7 01:23:59.624486 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:23:59.639495 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:23:59.689634 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none.
Mar 7 01:23:59.689379 systemd-networkd[879]: eth0: Gained IPv6LL
Mar 7 01:23:59.690576 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:23:59.697431 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:23:59.738369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:23:59.756310 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (919)
Mar 7 01:23:59.767550 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 01:23:59.767593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 01:23:59.771460 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:23:59.778317 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:23:59.778435 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:23:59.785471 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 7 01:23:59.793020 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:23:59.793052 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:23:59.806316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:23:59.818880 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:23:59.841538 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:24:00.291176 coreos-metadata[936]: Mar 07 01:24:00.291 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 01:24:00.298134 coreos-metadata[936]: Mar 07 01:24:00.298 INFO Fetch successful
Mar 7 01:24:00.298134 coreos-metadata[936]: Mar 07 01:24:00.298 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 7 01:24:00.312456 coreos-metadata[936]: Mar 07 01:24:00.312 INFO Fetch successful
Mar 7 01:24:00.329318 coreos-metadata[936]: Mar 07 01:24:00.329 INFO wrote hostname ci-4081.3.6-n-0072e04abc to /sysroot/etc/hostname
Mar 7 01:24:00.336569 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:24:00.635037 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:24:00.691517 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:24:00.715114 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:24:00.722197 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:24:01.998149 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:24:02.010485 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:24:02.022469 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:24:02.038613 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 01:24:02.033270 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:24:02.063572 ignition[1038]: INFO : Ignition 2.19.0
Mar 7 01:24:02.063572 ignition[1038]: INFO : Stage: mount
Mar 7 01:24:02.071643 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:24:02.071643 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:24:02.071643 ignition[1038]: INFO : mount: mount passed
Mar 7 01:24:02.071643 ignition[1038]: INFO : Ignition finished successfully
Mar 7 01:24:02.068654 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:24:02.093394 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:24:02.104987 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:24:02.114943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:24:02.138312 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1050)
Mar 7 01:24:02.149510 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 01:24:02.149524 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 01:24:02.152968 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:24:02.160305 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:24:02.162324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:24:02.189564 ignition[1067]: INFO : Ignition 2.19.0
Mar 7 01:24:02.189564 ignition[1067]: INFO : Stage: files
Mar 7 01:24:02.189564 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:24:02.189564 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:24:02.189564 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:24:02.210214 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:24:02.210214 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:24:02.296733 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:24:02.303350 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:24:02.303350 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:24:02.303119 unknown[1067]: wrote ssh authorized keys file for user: core
Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 01:24:02.320813 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 7 01:24:02.370274 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:24:02.476318 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 01:24:02.476318 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:24:02.492537 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 7 01:24:03.086601 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:24:03.553160 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:24:03.553160 ignition[1067]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 7 01:24:03.602230 ignition[1067]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:24:03.611922 ignition[1067]: INFO : files: files passed
Mar 7 01:24:03.611922 ignition[1067]: INFO : Ignition finished successfully
Mar 7 01:24:03.619091 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:24:03.654570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:24:03.666459 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:24:03.735684 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:24:03.735684 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:24:03.689420 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:24:03.760629 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:24:03.689529 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:24:03.706859 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:24:03.712591 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:24:03.738442 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:24:03.769498 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:24:03.771332 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:24:03.778828 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:24:03.788421 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:24:03.797934 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:24:03.814546 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:24:03.844707 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:24:03.861585 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:24:03.877787 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:24:03.888041 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:24:03.898367 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:24:03.906495 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:24:03.906664 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:24:03.921798 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:24:03.930642 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:24:03.938432 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:24:03.947940 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:24:03.958117 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:24:03.968108 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:24:03.972726 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:24:03.982590 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:24:03.992159 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:24:04.001621 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:24:04.009432 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:24:04.009605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:24:04.022221 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:24:04.031545 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:24:04.041272 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:24:04.041385 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:24:04.052242 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:24:04.052424 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:24:04.069186 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:24:04.069369 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:24:04.078875 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:24:04.079020 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:24:04.087592 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 01:24:04.087737 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:24:04.114414 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:24:04.140606 ignition[1119]: INFO : Ignition 2.19.0
Mar 7 01:24:04.140606 ignition[1119]: INFO : Stage: umount
Mar 7 01:24:04.140606 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:24:04.140606 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:24:04.140606 ignition[1119]: INFO : umount: umount passed
Mar 7 01:24:04.140606 ignition[1119]: INFO : Ignition finished successfully
Mar 7 01:24:04.122429 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:24:04.122632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:24:04.145481 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:24:04.156715 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:24:04.156857 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:24:04.166479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:24:04.166581 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:24:04.187787 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:24:04.188386 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:24:04.188472 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:24:04.193244 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:24:04.193347 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:24:04.203383 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:24:04.203421 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:24:04.212068 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:24:04.212101 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:24:04.221473 systemd[1]: Stopped target network.target - Network.
Mar 7 01:24:04.229056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:24:04.229098 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:24:04.238399 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:24:04.247323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:24:04.256325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:24:04.261974 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:24:04.269603 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:24:04.277448 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:24:04.277498 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:24:04.285717 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:24:04.285768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:24:04.293827 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:24:04.293865 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:24:04.302073 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:24:04.302103 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:24:04.310840 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:24:04.319160 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:24:04.328108 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:24:04.328192 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:24:04.337353 systemd-networkd[879]: eth0: DHCPv6 lease lost
Mar 7 01:24:04.342357 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:24:04.342472 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:24:04.352766 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:24:04.352855 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:24:04.532316 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: Data path switched from VF: enP49613s1
Mar 7 01:24:04.363619 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:24:04.363672 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:24:04.392475 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:24:04.400705 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:24:04.400774 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:24:04.410081 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:24:04.410122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:24:04.418673 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:24:04.418707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:24:04.427235 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:24:04.427271 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:24:04.436828 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:24:04.479958 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:24:04.480166 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:24:04.490207 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:24:04.490252 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:24:04.497825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:24:04.497850 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:24:04.506680 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:24:04.506720 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:24:04.527242 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:24:04.527294 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:24:04.540872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:24:04.540923 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:24:04.570512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:24:04.581368 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:24:04.581437 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:24:04.591984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:24:04.592028 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:24:04.602127 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:24:04.602330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:24:04.633153 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:24:04.633280 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:24:06.649931 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:24:06.650036 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:24:06.654585 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:24:06.662805 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:24:06.662858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:24:06.685477 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:24:06.757927 systemd[1]: Switching root.
Mar 7 01:24:06.786469 systemd-journald[217]: Journal stopped
Mar 7 01:24:11.657775 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:24:11.657799 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:24:11.657809 kernel: SELinux: policy capability open_perms=1
Mar 7 01:24:11.657819 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:24:11.657827 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:24:11.657834 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:24:11.657843 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:24:11.657851 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:24:11.657859 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:24:11.657867 kernel: audit: type=1403 audit(1772846648.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:24:11.657877 systemd[1]: Successfully loaded SELinux policy in 124.255ms.
Mar 7 01:24:11.657887 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.852ms.
Mar 7 01:24:11.657897 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:24:11.657906 systemd[1]: Detected virtualization microsoft.
Mar 7 01:24:11.657915 systemd[1]: Detected architecture arm64.
Mar 7 01:24:11.657926 systemd[1]: Detected first boot.
Mar 7 01:24:11.657935 systemd[1]: Hostname set to .
Mar 7 01:24:11.657944 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:24:11.657953 zram_generator::config[1179]: No configuration found.
Mar 7 01:24:11.657962 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:24:11.657972 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:24:11.657982 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:24:11.657993 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:24:11.658002 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:24:11.658011 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:24:11.658020 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:24:11.658029 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:24:11.658038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:24:11.658049 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:24:11.658058 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:24:11.658067 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:24:11.658077 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:24:11.658086 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:24:11.658095 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:24:11.658104 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:24:11.658113 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:24:11.658122 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 7 01:24:11.658133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:24:11.658142 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:24:11.658151 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:24:11.658162 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:24:11.658172 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:24:11.658181 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:24:11.658191 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:24:11.658202 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:24:11.658212 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:24:11.658221 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:24:11.658231 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:24:11.658240 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:24:11.658249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:24:11.658259 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:24:11.658270 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:24:11.658279 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:24:11.658289 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:24:11.658305 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:24:11.658315 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:24:11.658324 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:24:11.658335 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:24:11.658345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:24:11.658355 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:24:11.658364 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:24:11.658374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:24:11.658383 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:24:11.658393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:24:11.658403 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:24:11.658413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:24:11.658424 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:24:11.658433 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 01:24:11.658443 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 01:24:11.658453 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:24:11.658462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:24:11.658471 kernel: fuse: init (API version 7.39)
Mar 7 01:24:11.658480 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:24:11.658489 kernel: loop: module loaded
Mar 7 01:24:11.658499 kernel: ACPI: bus type drm_connector registered
Mar 7 01:24:11.658521 systemd-journald[1287]: Collecting audit messages is disabled.
Mar 7 01:24:11.658540 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:24:11.658550 systemd-journald[1287]: Journal started
Mar 7 01:24:11.658571 systemd-journald[1287]: Runtime Journal (/run/log/journal/fd9ed74e847c487a8eba02d6e959ad6c) is 8.0M, max 78.5M, 70.5M free.
Mar 7 01:24:11.677852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:24:11.689674 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:24:11.690963 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:24:11.696078 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:24:11.701110 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:24:11.705658 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:24:11.710563 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:24:11.715618 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:24:11.720166 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:24:11.725853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:24:11.731802 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:24:11.731948 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:24:11.737756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:24:11.737898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:24:11.743214 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:24:11.743376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:24:11.748568 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:24:11.748711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:24:11.754428 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:24:11.754567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:24:11.759715 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:24:11.759892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:24:11.765233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:24:11.770724 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:24:11.776660 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:24:11.782602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:24:11.798167 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:24:11.809410 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:24:11.815515 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:24:11.820635 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:24:11.870473 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:24:11.876511 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:24:11.881696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:24:11.882735 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:24:11.887704 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:24:11.888814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:24:11.904667 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:24:11.915452 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:24:11.923071 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:24:11.931880 systemd-journald[1287]: Time spent on flushing to /var/log/journal/fd9ed74e847c487a8eba02d6e959ad6c is 13.295ms for 881 entries.
Mar 7 01:24:11.931880 systemd-journald[1287]: System Journal (/var/log/journal/fd9ed74e847c487a8eba02d6e959ad6c) is 8.0M, max 2.6G, 2.6G free.
Mar 7 01:24:11.974451 systemd-journald[1287]: Received client request to flush runtime journal.
Mar 7 01:24:11.936810 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:24:11.945023 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:24:11.953583 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:24:11.959219 udevadm[1342]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:24:11.978761 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:24:12.003683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:24:12.337074 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Mar 7 01:24:12.337088 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Mar 7 01:24:12.342701 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:24:12.358443 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:24:12.483739 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:24:12.493401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:24:12.505870 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Mar 7 01:24:12.506157 systemd-tmpfiles[1358]: ACLs are not supported, ignoring.
Mar 7 01:24:12.511601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:24:13.185044 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:24:13.197468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:24:13.220717 systemd-udevd[1364]: Using default interface naming scheme 'v255'.
Mar 7 01:24:13.366990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:24:13.385463 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:24:13.415150 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Mar 7 01:24:13.449480 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:24:13.506314 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:24:13.535845 kernel: hv_vmbus: registering driver hv_balloon
Mar 7 01:24:13.535935 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 7 01:24:13.542898 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#82 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:24:13.543148 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 7 01:24:13.566306 kernel: hv_vmbus: registering driver hyperv_fb
Mar 7 01:24:13.566400 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 7 01:24:13.566418 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 7 01:24:13.552009 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:24:13.569766 kernel: Console: switching to colour dummy device 80x25
Mar 7 01:24:13.576569 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:24:13.585608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:24:13.598929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:24:13.599904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:24:13.617668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:24:13.627018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:24:13.627232 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:24:13.655482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:24:13.693127 systemd-networkd[1380]: lo: Link UP
Mar 7 01:24:13.693454 systemd-networkd[1380]: lo: Gained carrier
Mar 7 01:24:13.695284 systemd-networkd[1380]: Enumeration completed
Mar 7 01:24:13.695770 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:24:13.695836 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:24:13.702789 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:24:13.708315 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1381)
Mar 7 01:24:13.721448 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:24:13.766351 kernel: mlx5_core c1cd:00:02.0 enP49613s1: Link up
Mar 7 01:24:13.788813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:24:13.791310 kernel: hv_netvsc 7ced8d85-fd63-7ced-8d85-fd637ced8d85 eth0: Data path switched to VF: enP49613s1
Mar 7 01:24:13.796046 systemd-networkd[1380]: enP49613s1: Link UP
Mar 7 01:24:13.796445 systemd-networkd[1380]: eth0: Link UP
Mar 7 01:24:13.796449 systemd-networkd[1380]: eth0: Gained carrier
Mar 7 01:24:13.796464 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:24:13.798577 systemd-networkd[1380]: enP49613s1: Gained carrier
Mar 7 01:24:13.806330 systemd-networkd[1380]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 7 01:24:13.866285 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:24:13.879436 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:24:13.945308 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:24:13.970877 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:24:13.977162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:24:13.986542 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:24:13.995326 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:24:14.017645 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:24:14.023977 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:24:14.029820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:24:14.029945 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:24:14.034936 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:24:14.040920 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:24:14.052473 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:24:14.058495 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:24:14.063441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:24:14.066456 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:24:14.075142 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:24:14.083490 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:24:14.091695 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:24:14.662320 kernel: loop0: detected capacity change from 0 to 209336
Mar 7 01:24:14.670099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:24:15.310610 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:24:15.517402 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:24:15.561328 kernel: loop1: detected capacity change from 0 to 31320
Mar 7 01:24:15.747408 systemd-networkd[1380]: eth0: Gained IPv6LL
Mar 7 01:24:15.754420 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:24:16.958553 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:24:16.960684 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:24:17.493318 kernel: loop2: detected capacity change from 0 to 114432
Mar 7 01:24:18.284356 kernel: loop3: detected capacity change from 0 to 114328
Mar 7 01:24:19.346316 kernel: loop4: detected capacity change from 0 to 209336
Mar 7 01:24:19.364508 kernel: loop5: detected capacity change from 0 to 31320
Mar 7 01:24:19.464324 kernel: loop6: detected capacity change from 0 to 114432
Mar 7 01:24:19.510326 kernel: loop7: detected capacity change from 0 to 114328
Mar 7 01:24:19.753847 (sd-merge)[1489]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 7 01:24:19.754258 (sd-merge)[1489]: Merged extensions into '/usr'.
Mar 7 01:24:19.758437 systemd[1]: Reloading requested from client PID 1470 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:24:19.758462 systemd[1]: Reloading...
Mar 7 01:24:19.814408 zram_generator::config[1516]: No configuration found.
Mar 7 01:24:20.111767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:24:20.197853 systemd[1]: Reloading finished in 438 ms.
Mar 7 01:24:20.216958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:24:20.231437 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:24:20.244685 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:24:20.253276 systemd[1]: Reloading requested from client PID 1577 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:24:20.253289 systemd[1]: Reloading...
Mar 7 01:24:20.266766 systemd-tmpfiles[1578]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:24:20.267737 systemd-tmpfiles[1578]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:24:20.269968 systemd-tmpfiles[1578]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:24:20.270600 systemd-tmpfiles[1578]: ACLs are not supported, ignoring.
Mar 7 01:24:20.270881 systemd-tmpfiles[1578]: ACLs are not supported, ignoring.
Mar 7 01:24:20.316330 zram_generator::config[1610]: No configuration found.
Mar 7 01:24:20.402103 systemd-tmpfiles[1578]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:24:20.402115 systemd-tmpfiles[1578]: Skipping /boot
Mar 7 01:24:20.409167 systemd-tmpfiles[1578]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:24:20.409291 systemd-tmpfiles[1578]: Skipping /boot
Mar 7 01:24:20.419786 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:24:20.500992 systemd[1]: Reloading finished in 247 ms.
Mar 7 01:24:20.517247 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:24:20.546514 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:24:20.853419 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:24:20.861448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:24:20.871560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:24:20.889465 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:24:20.899690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:24:20.902282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:24:20.922529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:24:20.937497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:24:20.943192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:24:20.944090 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:24:20.950001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:24:20.950144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:24:20.955929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:24:20.956068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:24:20.961626 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:24:20.961801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:24:20.974784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:24:20.979497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:24:20.986578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:24:20.994523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:24:20.998971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:24:20.999774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:24:20.999914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:24:21.005115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:24:21.005248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:24:21.010979 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:24:21.011153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:24:21.018797 systemd-resolved[1681]: Positive Trust Anchors:
Mar 7 01:24:21.018811 systemd-resolved[1681]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:24:21.018849 systemd-resolved[1681]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:24:21.020845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:24:21.032493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:24:21.039526 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:24:21.049952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:24:21.058545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:24:21.063794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:24:21.064153 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:24:21.070294 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:24:21.076253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:24:21.076507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:24:21.082291 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:24:21.082570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:24:21.087723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:24:21.087860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:24:21.094103 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:24:21.094398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:24:21.101575 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:24:21.110894 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:24:21.111076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:24:21.204720 systemd-resolved[1681]: Using system hostname 'ci-4081.3.6-n-0072e04abc'.
Mar 7 01:24:21.206263 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:24:21.211396 systemd[1]: Reached target network.target - Network.
Mar 7 01:24:21.215449 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:24:21.215534 augenrules[1731]: No rules
Mar 7 01:24:21.220591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:24:21.226038 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:24:23.162018 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:24:23.168096 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:24:27.029248 ldconfig[1466]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:24:27.038698 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:24:27.052475 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:24:27.065616 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:24:27.071136 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:24:27.075737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:24:27.080910 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:24:27.086980 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:24:27.092024 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:24:27.097802 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:24:27.103911 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:24:27.103943 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:24:27.108185 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:24:27.113022 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:24:27.119898 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:24:27.125617 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:24:27.130723 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:24:27.135799 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:24:27.140288 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:24:27.144874 systemd[1]: System is tainted: cgroupsv1
Mar 7 01:24:27.144912 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:24:27.144929 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:24:27.152373 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 7 01:24:27.157468 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:24:27.165486 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:24:27.175466 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:24:27.191145 (chronyd)[1747]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 7 01:24:27.192377 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:24:27.200506 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:24:27.204889 jq[1754]: false
Mar 7 01:24:27.205335 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:24:27.205375 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 7 01:24:27.208446 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 7 01:24:27.212981 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 7 01:24:27.215390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:24:27.220156 KVP[1757]: KVP starting; pid is:1757
Mar 7 01:24:27.222159 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:24:27.236332 KVP[1757]: KVP LIC Version: 3.1
Mar 7 01:24:27.237333 kernel: hv_utils: KVP IC version 4.0
Mar 7 01:24:27.239484 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:24:27.244999 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:24:27.250169 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:24:27.261531 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:24:27.264704 chronyd[1771]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 7 01:24:27.271489 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:24:27.280785 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:24:27.282454 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:24:27.287403 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:24:27.307623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:24:27.307847 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found loop4
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found loop5
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found loop6
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found loop7
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda1
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda2
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda3
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found usr
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda4
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda6
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda7
Mar 7 01:24:27.318991 extend-filesystems[1755]: Found sda9
Mar 7 01:24:27.318991 extend-filesystems[1755]: Checking size of /dev/sda9
Mar 7 01:24:27.391391 jq[1776]: true
Mar 7 01:24:27.324047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:24:27.325178 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:24:27.372066 (ntainerd)[1792]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:24:27.382818 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:24:27.383050 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:24:27.392192 jq[1783]: true
Mar 7 01:24:27.516446 extend-filesystems[1755]: Old size kept for /dev/sda9
Mar 7 01:24:27.522956 extend-filesystems[1755]: Found sr0
Mar 7 01:24:27.523622 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:24:27.523875 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:24:27.536236 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:24:27.553262 systemd-logind[1773]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:24:27.555407 systemd-logind[1773]: New seat seat0.
Mar 7 01:24:27.556567 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:24:27.604167 chronyd[1771]: Timezone right/UTC failed leap second check, ignoring
Mar 7 01:24:27.604454 chronyd[1771]: Loaded seccomp filter (level 2)
Mar 7 01:24:27.605420 systemd[1]: Started chronyd.service - NTP client/server.
Mar 7 01:24:27.623674 tar[1779]: linux-arm64/LICENSE
Mar 7 01:24:27.624008 tar[1779]: linux-arm64/helm
Mar 7 01:24:27.671331 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1836)
Mar 7 01:24:27.953388 update_engine[1775]: I20260307 01:24:27.944475 1775 main.cc:92] Flatcar Update Engine starting
Mar 7 01:24:28.077634 sshd_keygen[1798]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:24:28.099715 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:24:28.109431 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:24:28.141992 update_engine[1775]: I20260307 01:24:28.123003 1775 update_check_scheduler.cc:74] Next update check in 11m3s
Mar 7 01:24:28.115520 dbus-daemon[1753]: [system] SELinux support is enabled
Mar 7 01:24:28.117410 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 7 01:24:28.123222 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:24:28.136778 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:24:28.137002 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:24:28.147565 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:24:28.147612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:24:28.150738 dbus-daemon[1753]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:24:28.163070 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:24:28.170879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:24:28.170913 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:24:28.180736 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:24:28.188772 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:24:28.191659 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:24:28.208501 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 7 01:24:28.213681 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:24:28.228602 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:24:28.234666 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 7 01:24:28.240417 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:24:28.307932 coreos-metadata[1749]: Mar 07 01:24:28.307 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 01:24:28.311379 coreos-metadata[1749]: Mar 07 01:24:28.310 INFO Fetch successful
Mar 7 01:24:28.312046 coreos-metadata[1749]: Mar 07 01:24:28.311 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 7 01:24:28.315517 coreos-metadata[1749]: Mar 07 01:24:28.315 INFO Fetch successful
Mar 7 01:24:28.316128 coreos-metadata[1749]: Mar 07 01:24:28.316 INFO Fetching http://168.63.129.16/machine/6854bf40-84de-4a74-a1ee-840b04f75b7f/f06a1154%2D5bb8%2D42bd%2Da1c0%2D8126f5dd19b5.%5Fci%2D4081.3.6%2Dn%2D0072e04abc?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 7 01:24:28.318167 coreos-metadata[1749]: Mar 07 01:24:28.318 INFO Fetch successful
Mar 7 01:24:28.318419 coreos-metadata[1749]: Mar 07 01:24:28.318 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 7 01:24:28.330325 coreos-metadata[1749]: Mar 07 01:24:28.330 INFO Fetch successful
Mar 7 01:24:28.345414 bash[1816]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:24:28.348078 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:24:28.364539 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:24:28.370325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:24:28.378610 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:24:28.506752 locksmithd[1889]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:24:28.574214 tar[1779]: linux-arm64/README.md
Mar 7 01:24:28.588643 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:24:28.642482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:24:28.648468 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:24:28.885292 containerd[1792]: time="2026-03-07T01:24:28.885216980Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:24:28.911853 containerd[1792]: time="2026-03-07T01:24:28.911806420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:24:28.913352 containerd[1792]: time="2026-03-07T01:24:28.913315100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:24:28.913459 containerd[1792]: time="2026-03-07T01:24:28.913445500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:24:28.913511 containerd[1792]: time="2026-03-07T01:24:28.913500660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.913702020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.913723140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.913781860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.913800180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.913991820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.914006060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.914018540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.914028140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.914091260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.914280220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914666 containerd[1792]: time="2026-03-07T01:24:28.914436940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:24:28.914929 containerd[1792]: time="2026-03-07T01:24:28.914453420Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:24:28.914929 containerd[1792]: time="2026-03-07T01:24:28.914526060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:24:28.914929 containerd[1792]: time="2026-03-07T01:24:28.914561220Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:24:28.925558 containerd[1792]: time="2026-03-07T01:24:28.925523780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.925574980Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.925593260Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.925608820Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.925624820Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.925773700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926079740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926176660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926191580Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926204860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926222860Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926237260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926255180Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926270180Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926412 containerd[1792]: time="2026-03-07T01:24:28.926285100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926314580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926329660Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926341180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926361060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926375220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926386820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926400460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926419020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926431740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926443860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926456100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926469340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926483820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926696 containerd[1792]: time="2026-03-07T01:24:28.926494740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926506860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926519260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926534740Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926561100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926573100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926583900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926652340Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926670300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926681500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926693100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926706740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926718740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926728260Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:24:28.926945 containerd[1792]: time="2026-03-07T01:24:28.926738260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:24:28.927173 containerd[1792]: time="2026-03-07T01:24:28.926992900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:24:28.927173 containerd[1792]: time="2026-03-07T01:24:28.927047900Z" level=info msg="Connect containerd service"
Mar 7 01:24:28.927173 containerd[1792]: time="2026-03-07T01:24:28.927079860Z" level=info msg="using legacy CRI server"
Mar 7 01:24:28.927173 containerd[1792]: time="2026-03-07T01:24:28.927087140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:24:28.927349 containerd[1792]: time="2026-03-07T01:24:28.927189620Z"
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:24:28.928559 containerd[1792]: time="2026-03-07T01:24:28.928160980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:24:28.929515 containerd[1792]: time="2026-03-07T01:24:28.929066900Z" level=info msg="Start subscribing containerd event" Mar 7 01:24:28.929515 containerd[1792]: time="2026-03-07T01:24:28.929112340Z" level=info msg="Start recovering state" Mar 7 01:24:28.929515 containerd[1792]: time="2026-03-07T01:24:28.929197180Z" level=info msg="Start event monitor" Mar 7 01:24:28.929515 containerd[1792]: time="2026-03-07T01:24:28.929208380Z" level=info msg="Start snapshots syncer" Mar 7 01:24:28.929515 containerd[1792]: time="2026-03-07T01:24:28.929218660Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:24:28.929515 containerd[1792]: time="2026-03-07T01:24:28.929230620Z" level=info msg="Start streaming server" Mar 7 01:24:28.932637 containerd[1792]: time="2026-03-07T01:24:28.930766140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:24:28.932637 containerd[1792]: time="2026-03-07T01:24:28.930821820Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:24:28.930973 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:24:28.936119 containerd[1792]: time="2026-03-07T01:24:28.936094460Z" level=info msg="containerd successfully booted in 0.051677s" Mar 7 01:24:28.938948 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:24:28.947568 systemd[1]: Startup finished in 15.058s (kernel) + 20.573s (userspace) = 35.631s. 
Mar 7 01:24:29.109028 kubelet[1931]: E0307 01:24:29.108981 1931 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:24:29.113477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:24:29.113636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:24:29.259392 login[1896]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 7 01:24:29.260987 login[1897]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:24:29.268113 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:24:29.273498 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:24:29.275392 systemd-logind[1773]: New session 1 of user core. Mar 7 01:24:29.309497 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:24:29.321533 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:24:29.341609 (systemd)[1952]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:24:29.503411 systemd[1952]: Queued start job for default target default.target. Mar 7 01:24:29.504368 systemd[1952]: Created slice app.slice - User Application Slice. Mar 7 01:24:29.504496 systemd[1952]: Reached target paths.target - Paths. Mar 7 01:24:29.504568 systemd[1952]: Reached target timers.target - Timers. Mar 7 01:24:29.514471 systemd[1952]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:24:29.520294 systemd[1952]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:24:29.520361 systemd[1952]: Reached target sockets.target - Sockets. 
Mar 7 01:24:29.520373 systemd[1952]: Reached target basic.target - Basic System. Mar 7 01:24:29.520410 systemd[1952]: Reached target default.target - Main User Target. Mar 7 01:24:29.520434 systemd[1952]: Startup finished in 173ms. Mar 7 01:24:29.520523 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:24:29.525536 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:24:29.881438 waagent[1892]: 2026-03-07T01:24:29.881355Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 7 01:24:29.889308 waagent[1892]: 2026-03-07T01:24:29.886059Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 7 01:24:29.889597 waagent[1892]: 2026-03-07T01:24:29.889552Z INFO Daemon Daemon Python: 3.11.9 Mar 7 01:24:29.894307 waagent[1892]: 2026-03-07T01:24:29.892998Z INFO Daemon Daemon Run daemon Mar 7 01:24:29.897500 waagent[1892]: 2026-03-07T01:24:29.897459Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 7 01:24:29.904226 waagent[1892]: 2026-03-07T01:24:29.904179Z INFO Daemon Daemon Using waagent for provisioning Mar 7 01:24:29.908291 waagent[1892]: 2026-03-07T01:24:29.908253Z INFO Daemon Daemon Activate resource disk Mar 7 01:24:29.911868 waagent[1892]: 2026-03-07T01:24:29.911829Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 7 01:24:29.921140 waagent[1892]: 2026-03-07T01:24:29.921098Z INFO Daemon Daemon Found device: None Mar 7 01:24:29.924511 waagent[1892]: 2026-03-07T01:24:29.924471Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 7 01:24:29.931216 waagent[1892]: 2026-03-07T01:24:29.931178Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 7 01:24:29.941020 waagent[1892]: 2026-03-07T01:24:29.940976Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 
01:24:29.945575 waagent[1892]: 2026-03-07T01:24:29.945536Z INFO Daemon Daemon Running default provisioning handler Mar 7 01:24:29.956678 waagent[1892]: 2026-03-07T01:24:29.956630Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 7 01:24:29.967060 waagent[1892]: 2026-03-07T01:24:29.967010Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 7 01:24:29.974312 waagent[1892]: 2026-03-07T01:24:29.974258Z INFO Daemon Daemon cloud-init is enabled: False Mar 7 01:24:29.978186 waagent[1892]: 2026-03-07T01:24:29.978143Z INFO Daemon Daemon Copying ovf-env.xml Mar 7 01:24:30.098574 waagent[1892]: 2026-03-07T01:24:30.098491Z INFO Daemon Daemon Successfully mounted dvd Mar 7 01:24:30.112379 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 7 01:24:30.116328 waagent[1892]: 2026-03-07T01:24:30.114450Z INFO Daemon Daemon Detect protocol endpoint Mar 7 01:24:30.118716 waagent[1892]: 2026-03-07T01:24:30.118672Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 01:24:30.123462 waagent[1892]: 2026-03-07T01:24:30.123421Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 7 01:24:30.128801 waagent[1892]: 2026-03-07T01:24:30.128767Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 7 01:24:30.133162 waagent[1892]: 2026-03-07T01:24:30.133094Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 7 01:24:30.137054 waagent[1892]: 2026-03-07T01:24:30.137022Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 7 01:24:30.170697 waagent[1892]: 2026-03-07T01:24:30.170657Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 7 01:24:30.176038 waagent[1892]: 2026-03-07T01:24:30.176014Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 7 01:24:30.180534 waagent[1892]: 2026-03-07T01:24:30.180494Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 7 01:24:30.259849 login[1896]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:24:30.270338 systemd-logind[1773]: New session 2 of user core. Mar 7 01:24:30.271532 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:24:30.575908 waagent[1892]: 2026-03-07T01:24:30.575774Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 7 01:24:30.581176 waagent[1892]: 2026-03-07T01:24:30.581112Z INFO Daemon Daemon Forcing an update of the goal state. Mar 7 01:24:30.589077 waagent[1892]: 2026-03-07T01:24:30.589030Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 01:24:30.659414 waagent[1892]: 2026-03-07T01:24:30.659367Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 7 01:24:30.664737 waagent[1892]: 2026-03-07T01:24:30.664692Z INFO Daemon Mar 7 01:24:30.667354 waagent[1892]: 2026-03-07T01:24:30.667311Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f31a69ef-dca4-4d4c-92ce-968acf261694 eTag: 14565612804054867215 source: Fabric] Mar 7 01:24:30.677657 waagent[1892]: 2026-03-07T01:24:30.677611Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 7 01:24:30.683888 waagent[1892]: 2026-03-07T01:24:30.683841Z INFO Daemon Mar 7 01:24:30.686732 waagent[1892]: 2026-03-07T01:24:30.686691Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 7 01:24:30.696830 waagent[1892]: 2026-03-07T01:24:30.696798Z INFO Daemon Daemon Downloading artifacts profile blob Mar 7 01:24:30.855362 waagent[1892]: 2026-03-07T01:24:30.854563Z INFO Daemon Downloaded certificate {'thumbprint': 'B057A9A874649BE06F19A2F9A64B837C88F59510', 'hasPrivateKey': True} Mar 7 01:24:30.863260 waagent[1892]: 2026-03-07T01:24:30.863212Z INFO Daemon Fetch goal state completed Mar 7 01:24:30.909599 waagent[1892]: 2026-03-07T01:24:30.909544Z INFO Daemon Daemon Starting provisioning Mar 7 01:24:30.913854 waagent[1892]: 2026-03-07T01:24:30.913806Z INFO Daemon Daemon Handle ovf-env.xml. Mar 7 01:24:30.917951 waagent[1892]: 2026-03-07T01:24:30.917918Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-0072e04abc] Mar 7 01:24:30.939572 waagent[1892]: 2026-03-07T01:24:30.939512Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-0072e04abc] Mar 7 01:24:30.944918 waagent[1892]: 2026-03-07T01:24:30.944871Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 7 01:24:30.950176 waagent[1892]: 2026-03-07T01:24:30.950137Z INFO Daemon Daemon Primary interface is [eth0] Mar 7 01:24:30.977314 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:24:30.977321 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 7 01:24:30.977361 systemd-networkd[1380]: eth0: DHCP lease lost Mar 7 01:24:30.978405 waagent[1892]: 2026-03-07T01:24:30.978340Z INFO Daemon Daemon Create user account if not exists Mar 7 01:24:30.983208 waagent[1892]: 2026-03-07T01:24:30.983160Z INFO Daemon Daemon User core already exists, skip useradd Mar 7 01:24:30.983497 systemd-networkd[1380]: eth0: DHCPv6 lease lost Mar 7 01:24:30.988153 waagent[1892]: 2026-03-07T01:24:30.988099Z INFO Daemon Daemon Configure sudoer Mar 7 01:24:30.992169 waagent[1892]: 2026-03-07T01:24:30.992035Z INFO Daemon Daemon Configure sshd Mar 7 01:24:31.006689 waagent[1892]: 2026-03-07T01:24:31.006594Z INFO Daemon Daemon Deploy ssh public key. Mar 7 01:24:31.015388 systemd-networkd[1380]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 01:24:32.106604 waagent[1892]: 2026-03-07T01:24:32.106563Z INFO Daemon Daemon Provisioning complete Mar 7 01:24:32.122657 waagent[1892]: 2026-03-07T01:24:32.122615Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 7 01:24:32.127967 waagent[1892]: 2026-03-07T01:24:32.127924Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 7 01:24:32.135912 waagent[1892]: 2026-03-07T01:24:32.135875Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 7 01:24:32.260283 waagent[2004]: 2026-03-07T01:24:32.260213Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 7 01:24:32.261221 waagent[2004]: 2026-03-07T01:24:32.260749Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 7 01:24:32.261221 waagent[2004]: 2026-03-07T01:24:32.260820Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 7 01:24:32.299330 waagent[2004]: 2026-03-07T01:24:32.298597Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 7 01:24:32.299330 waagent[2004]: 2026-03-07T01:24:32.298824Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:24:32.299330 waagent[2004]: 2026-03-07T01:24:32.298885Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:24:32.306886 waagent[2004]: 2026-03-07T01:24:32.306827Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 01:24:32.432690 waagent[2004]: 2026-03-07T01:24:32.432608Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 7 01:24:32.433175 waagent[2004]: 2026-03-07T01:24:32.433132Z INFO ExtHandler Mar 7 01:24:32.433239 waagent[2004]: 2026-03-07T01:24:32.433214Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 49a20900-05ed-439c-ad8c-4b08de716940 eTag: 14565612804054867215 source: Fabric] Mar 7 01:24:32.433540 waagent[2004]: 2026-03-07T01:24:32.433503Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 7 01:24:32.434103 waagent[2004]: 2026-03-07T01:24:32.434061Z INFO ExtHandler Mar 7 01:24:32.434162 waagent[2004]: 2026-03-07T01:24:32.434137Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 7 01:24:32.438188 waagent[2004]: 2026-03-07T01:24:32.438157Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 7 01:24:32.503629 waagent[2004]: 2026-03-07T01:24:32.503547Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B057A9A874649BE06F19A2F9A64B837C88F59510', 'hasPrivateKey': True} Mar 7 01:24:32.504122 waagent[2004]: 2026-03-07T01:24:32.504080Z INFO ExtHandler Fetch goal state completed Mar 7 01:24:32.518841 waagent[2004]: 2026-03-07T01:24:32.518793Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2004 Mar 7 01:24:32.518980 waagent[2004]: 2026-03-07T01:24:32.518949Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 7 01:24:32.520541 waagent[2004]: 2026-03-07T01:24:32.520499Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 7 01:24:32.520889 waagent[2004]: 2026-03-07T01:24:32.520856Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 7 01:24:32.567243 waagent[2004]: 2026-03-07T01:24:32.567202Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 7 01:24:32.567452 waagent[2004]: 2026-03-07T01:24:32.567414Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 7 01:24:32.573447 waagent[2004]: 2026-03-07T01:24:32.573414Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 7 01:24:32.579763 systemd[1]: Reloading requested from client PID 2017 ('systemctl') (unit waagent.service)... Mar 7 01:24:32.580019 systemd[1]: Reloading... 
Mar 7 01:24:32.667380 zram_generator::config[2060]: No configuration found. Mar 7 01:24:32.756846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:24:32.843240 systemd[1]: Reloading finished in 262 ms. Mar 7 01:24:32.864139 waagent[2004]: 2026-03-07T01:24:32.864057Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 7 01:24:32.871027 systemd[1]: Reloading requested from client PID 2110 ('systemctl') (unit waagent.service)... Mar 7 01:24:32.871161 systemd[1]: Reloading... Mar 7 01:24:32.933445 zram_generator::config[2144]: No configuration found. Mar 7 01:24:33.049694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:24:33.131707 systemd[1]: Reloading finished in 260 ms. Mar 7 01:24:33.153364 waagent[2004]: 2026-03-07T01:24:33.153097Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 7 01:24:33.153364 waagent[2004]: 2026-03-07T01:24:33.153257Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 7 01:24:33.586951 waagent[2004]: 2026-03-07T01:24:33.586867Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 7 01:24:33.587514 waagent[2004]: 2026-03-07T01:24:33.587468Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 7 01:24:33.588246 waagent[2004]: 2026-03-07T01:24:33.588173Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 7 01:24:33.588648 waagent[2004]: 2026-03-07T01:24:33.588569Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 7 01:24:33.589584 waagent[2004]: 2026-03-07T01:24:33.588865Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:24:33.589584 waagent[2004]: 2026-03-07T01:24:33.588947Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:24:33.589584 waagent[2004]: 2026-03-07T01:24:33.589138Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 7 01:24:33.589584 waagent[2004]: 2026-03-07T01:24:33.589314Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 7 01:24:33.589584 waagent[2004]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 7 01:24:33.589584 waagent[2004]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 7 01:24:33.589584 waagent[2004]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 7 01:24:33.589584 waagent[2004]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:24:33.589584 waagent[2004]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:24:33.589584 waagent[2004]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:24:33.589952 waagent[2004]: 2026-03-07T01:24:33.589901Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:24:33.590116 waagent[2004]: 2026-03-07T01:24:33.590073Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 7 01:24:33.590171 waagent[2004]: 2026-03-07T01:24:33.590125Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 7 01:24:33.590497 waagent[2004]: 2026-03-07T01:24:33.590442Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 7 01:24:33.590613 waagent[2004]: 2026-03-07T01:24:33.590571Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Mar 7 01:24:33.590960 waagent[2004]: 2026-03-07T01:24:33.590913Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 7 01:24:33.591377 waagent[2004]: 2026-03-07T01:24:33.591293Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:24:33.592166 waagent[2004]: 2026-03-07T01:24:33.592130Z INFO EnvHandler ExtHandler Configure routes Mar 7 01:24:33.592391 waagent[2004]: 2026-03-07T01:24:33.592361Z INFO EnvHandler ExtHandler Gateway:None Mar 7 01:24:33.593525 waagent[2004]: 2026-03-07T01:24:33.593461Z INFO EnvHandler ExtHandler Routes:None Mar 7 01:24:33.596823 waagent[2004]: 2026-03-07T01:24:33.596780Z INFO ExtHandler ExtHandler Mar 7 01:24:33.597244 waagent[2004]: 2026-03-07T01:24:33.597113Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: be6535a6-0e53-4a73-9f7c-e95d5d5c4647 correlation cb8119f3-0e19-4d3e-b1b3-4b79ef2f5c1f created: 2026-03-07T01:23:23.302811Z] Mar 7 01:24:33.598000 waagent[2004]: 2026-03-07T01:24:33.597953Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 7 01:24:33.599462 waagent[2004]: 2026-03-07T01:24:33.599417Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 7 01:24:33.645232 waagent[2004]: 2026-03-07T01:24:33.645114Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BFD86894-AB1E-4E78-9844-1CB3037887B1;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 7 01:24:33.645428 waagent[2004]: 2026-03-07T01:24:33.645376Z INFO MonitorHandler ExtHandler Network interfaces: Mar 7 01:24:33.645428 waagent[2004]: Executing ['ip', '-a', '-o', 'link']: Mar 7 01:24:33.645428 waagent[2004]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 7 01:24:33.645428 waagent[2004]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:85:fd:63 brd ff:ff:ff:ff:ff:ff Mar 7 01:24:33.645428 waagent[2004]: 3: enP49613s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:85:fd:63 brd ff:ff:ff:ff:ff:ff\ altname enP49613p0s2 Mar 7 01:24:33.645428 waagent[2004]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 7 01:24:33.645428 waagent[2004]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 7 01:24:33.645428 waagent[2004]: 2: eth0 inet 10.200.20.23/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 7 01:24:33.645428 waagent[2004]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 7 01:24:33.645428 waagent[2004]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 7 01:24:33.645428 waagent[2004]: 2: eth0 inet6 fe80::7eed:8dff:fe85:fd63/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 7 01:24:33.703370 waagent[2004]: 2026-03-07T01:24:33.703056Z INFO EnvHandler ExtHandler Successfully added 
Azure fabric firewall rules. Current Firewall rules: Mar 7 01:24:33.703370 waagent[2004]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:24:33.703370 waagent[2004]: pkts bytes target prot opt in out source destination Mar 7 01:24:33.703370 waagent[2004]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:24:33.703370 waagent[2004]: pkts bytes target prot opt in out source destination Mar 7 01:24:33.703370 waagent[2004]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:24:33.703370 waagent[2004]: pkts bytes target prot opt in out source destination Mar 7 01:24:33.703370 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:24:33.703370 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:24:33.703370 waagent[2004]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:24:33.705960 waagent[2004]: 2026-03-07T01:24:33.705899Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 7 01:24:33.705960 waagent[2004]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:24:33.705960 waagent[2004]: pkts bytes target prot opt in out source destination Mar 7 01:24:33.705960 waagent[2004]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:24:33.705960 waagent[2004]: pkts bytes target prot opt in out source destination Mar 7 01:24:33.705960 waagent[2004]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:24:33.705960 waagent[2004]: pkts bytes target prot opt in out source destination Mar 7 01:24:33.705960 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:24:33.705960 waagent[2004]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:24:33.705960 waagent[2004]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:24:33.706201 waagent[2004]: 2026-03-07T01:24:33.706169Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 7 01:24:39.342716 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1.
Mar 7 01:24:39.351446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:24:39.415475 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:24:39.425539 systemd[1]: Started sshd@0-10.200.20.23:22-10.200.16.10:52622.service - OpenSSH per-connection server daemon (10.200.16.10:52622).
Mar 7 01:24:39.527468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:24:39.531138 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:24:39.571356 kubelet[2247]: E0307 01:24:39.571287 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:24:39.576487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:24:39.576637 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:24:40.415046 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 52622 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:24:40.416312 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:40.421083 systemd-logind[1773]: New session 3 of user core.
Mar 7 01:24:40.426520 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 01:24:40.846662 systemd[1]: Started sshd@1-10.200.20.23:22-10.200.16.10:33948.service - OpenSSH per-connection server daemon (10.200.16.10:33948).
Mar 7 01:24:41.334650 sshd[2259]: Accepted publickey for core from 10.200.16.10 port 33948 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:24:41.335544 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:41.339784 systemd-logind[1773]: New session 4 of user core.
Mar 7 01:24:41.346669 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 01:24:41.692494 sshd[2259]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:41.695557 systemd[1]: sshd@1-10.200.20.23:22-10.200.16.10:33948.service: Deactivated successfully.
Mar 7 01:24:41.698835 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 01:24:41.699694 systemd-logind[1773]: Session 4 logged out. Waiting for processes to exit.
Mar 7 01:24:41.700840 systemd-logind[1773]: Removed session 4.
Mar 7 01:24:41.776504 systemd[1]: Started sshd@2-10.200.20.23:22-10.200.16.10:33954.service - OpenSSH per-connection server daemon (10.200.16.10:33954).
Mar 7 01:24:42.259470 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 33954 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:24:42.260257 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:42.264349 systemd-logind[1773]: New session 5 of user core.
Mar 7 01:24:42.274507 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 01:24:42.607357 sshd[2267]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:42.610255 systemd[1]: sshd@2-10.200.20.23:22-10.200.16.10:33954.service: Deactivated successfully.
Mar 7 01:24:42.612958 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 01:24:42.613763 systemd-logind[1773]: Session 5 logged out. Waiting for processes to exit.
Mar 7 01:24:42.614549 systemd-logind[1773]: Removed session 5.
Mar 7 01:24:42.700599 systemd[1]: Started sshd@3-10.200.20.23:22-10.200.16.10:33958.service - OpenSSH per-connection server daemon (10.200.16.10:33958).
Mar 7 01:24:43.182264 sshd[2275]: Accepted publickey for core from 10.200.16.10 port 33958 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:24:43.183036 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:43.187426 systemd-logind[1773]: New session 6 of user core.
Mar 7 01:24:43.193527 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 01:24:43.530473 sshd[2275]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:43.533715 systemd[1]: sshd@3-10.200.20.23:22-10.200.16.10:33958.service: Deactivated successfully.
Mar 7 01:24:43.536545 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 01:24:43.537263 systemd-logind[1773]: Session 6 logged out. Waiting for processes to exit.
Mar 7 01:24:43.538147 systemd-logind[1773]: Removed session 6.
Mar 7 01:24:43.619729 systemd[1]: Started sshd@4-10.200.20.23:22-10.200.16.10:33970.service - OpenSSH per-connection server daemon (10.200.16.10:33970).
Mar 7 01:24:44.105059 sshd[2283]: Accepted publickey for core from 10.200.16.10 port 33970 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:24:44.105843 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:44.109725 systemd-logind[1773]: New session 7 of user core.
Mar 7 01:24:44.115502 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:24:44.485439 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 01:24:44.485714 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:24:44.515073 sudo[2287]: pam_unix(sudo:session): session closed for user root
Mar 7 01:24:44.592375 sshd[2283]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:44.596529 systemd[1]: sshd@4-10.200.20.23:22-10.200.16.10:33970.service: Deactivated successfully.
Mar 7 01:24:44.599448 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:24:44.600216 systemd-logind[1773]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:24:44.601346 systemd-logind[1773]: Removed session 7.
Mar 7 01:24:44.683595 systemd[1]: Started sshd@5-10.200.20.23:22-10.200.16.10:33980.service - OpenSSH per-connection server daemon (10.200.16.10:33980).
Mar 7 01:24:45.169185 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 33980 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:24:45.170054 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:45.173564 systemd-logind[1773]: New session 8 of user core.
Mar 7 01:24:45.184499 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:24:45.443906 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 01:24:45.444550 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:24:45.447766 sudo[2297]: pam_unix(sudo:session): session closed for user root
Mar 7 01:24:45.451879 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 01:24:45.452383 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:24:45.465570 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 01:24:45.466924 auditctl[2300]: No rules
Mar 7 01:24:45.468037 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 01:24:45.468282 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 01:24:45.480642 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:24:45.499246 augenrules[2319]: No rules
Mar 7 01:24:45.501659 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:24:45.503274 sudo[2296]: pam_unix(sudo:session): session closed for user root
Mar 7 01:24:45.581180 sshd[2292]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:45.585460 systemd-logind[1773]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:24:45.585949 systemd[1]: sshd@5-10.200.20.23:22-10.200.16.10:33980.service: Deactivated successfully.
Mar 7 01:24:45.588538 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:24:45.589667 systemd-logind[1773]: Removed session 8.
Mar 7 01:24:45.666670 systemd[1]: Started sshd@6-10.200.20.23:22-10.200.16.10:33992.service - OpenSSH per-connection server daemon (10.200.16.10:33992).
Mar 7 01:24:46.151815 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 33992 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:24:46.153090 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:46.156637 systemd-logind[1773]: New session 9 of user core.
Mar 7 01:24:46.169763 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:24:46.426683 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:24:46.426946 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:24:47.586500 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:24:47.586705 (dockerd)[2347]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:24:48.196062 dockerd[2347]: time="2026-03-07T01:24:48.196008780Z" level=info msg="Starting up"
Mar 7 01:24:48.511011 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1963240209-merged.mount: Deactivated successfully.
Mar 7 01:24:48.768869 dockerd[2347]: time="2026-03-07T01:24:48.768787860Z" level=info msg="Loading containers: start."
Mar 7 01:24:48.922466 kernel: Initializing XFRM netlink socket
Mar 7 01:24:49.078032 systemd-networkd[1380]: docker0: Link UP
Mar 7 01:24:49.099338 dockerd[2347]: time="2026-03-07T01:24:49.099286300Z" level=info msg="Loading containers: done."
Mar 7 01:24:49.114948 dockerd[2347]: time="2026-03-07T01:24:49.114907540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:24:49.115076 dockerd[2347]: time="2026-03-07T01:24:49.115005300Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:24:49.115114 dockerd[2347]: time="2026-03-07T01:24:49.115097100Z" level=info msg="Daemon has completed initialization"
Mar 7 01:24:49.176458 dockerd[2347]: time="2026-03-07T01:24:49.176395580Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:24:49.176809 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:24:49.557308 containerd[1792]: time="2026-03-07T01:24:49.557269820Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 7 01:24:49.592664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:24:49.597494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:24:49.700146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:24:49.704216 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:24:49.821574 kubelet[2496]: E0307 01:24:49.820995 2496 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:24:49.825489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:24:49.825640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:24:50.767337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520687300.mount: Deactivated successfully.
Mar 7 01:24:51.389086 chronyd[1771]: Selected source PHC0
Mar 7 01:24:51.796400 containerd[1792]: time="2026-03-07T01:24:51.795812419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:51.800056 containerd[1792]: time="2026-03-07T01:24:51.799867611Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 7 01:24:51.803292 containerd[1792]: time="2026-03-07T01:24:51.803055601Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:51.807674 containerd[1792]: time="2026-03-07T01:24:51.807631785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:51.809109 containerd[1792]: time="2026-03-07T01:24:51.808697448Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.251376549s"
Mar 7 01:24:51.809109 containerd[1792]: time="2026-03-07T01:24:51.808733965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 7 01:24:51.809235 containerd[1792]: time="2026-03-07T01:24:51.809212402Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 01:24:53.019698 containerd[1792]: time="2026-03-07T01:24:53.019634396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:53.022112 containerd[1792]: time="2026-03-07T01:24:53.021905752Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 7 01:24:53.026605 containerd[1792]: time="2026-03-07T01:24:53.026580744Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:53.033728 containerd[1792]: time="2026-03-07T01:24:53.033697091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:53.034839 containerd[1792]: time="2026-03-07T01:24:53.034808289Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.225565529s"
Mar 7 01:24:53.034898 containerd[1792]: time="2026-03-07T01:24:53.034838489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 7 01:24:53.035449 containerd[1792]: time="2026-03-07T01:24:53.035267248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 01:24:54.386336 containerd[1792]: time="2026-03-07T01:24:54.385642124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:54.388138 containerd[1792]: time="2026-03-07T01:24:54.388110480Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 7 01:24:54.391767 containerd[1792]: time="2026-03-07T01:24:54.391744513Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:54.396328 containerd[1792]: time="2026-03-07T01:24:54.396267624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:54.397631 containerd[1792]: time="2026-03-07T01:24:54.397276782Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.361972654s"
Mar 7 01:24:54.397631 containerd[1792]: time="2026-03-07T01:24:54.397319782Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 7 01:24:54.398404 containerd[1792]: time="2026-03-07T01:24:54.398372940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 01:24:55.486582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289304002.mount: Deactivated successfully.
Mar 7 01:24:55.812028 containerd[1792]: time="2026-03-07T01:24:55.811924539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:55.814436 containerd[1792]: time="2026-03-07T01:24:55.814406934Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 7 01:24:55.818567 containerd[1792]: time="2026-03-07T01:24:55.817692608Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:55.821730 containerd[1792]: time="2026-03-07T01:24:55.821704440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:55.822594 containerd[1792]: time="2026-03-07T01:24:55.822571279Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.424102059s"
Mar 7 01:24:55.822779 containerd[1792]: time="2026-03-07T01:24:55.822763478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 7 01:24:55.823503 containerd[1792]: time="2026-03-07T01:24:55.823475277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 01:24:56.524601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255576953.mount: Deactivated successfully.
Mar 7 01:24:57.732285 containerd[1792]: time="2026-03-07T01:24:57.732234376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:57.737328 containerd[1792]: time="2026-03-07T01:24:57.737279407Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 7 01:24:57.741951 containerd[1792]: time="2026-03-07T01:24:57.741560239Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:57.746659 containerd[1792]: time="2026-03-07T01:24:57.746629429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:57.747687 containerd[1792]: time="2026-03-07T01:24:57.747657227Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.92398667s"
Mar 7 01:24:57.747742 containerd[1792]: time="2026-03-07T01:24:57.747688067Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 7 01:24:57.748494 containerd[1792]: time="2026-03-07T01:24:57.748472825Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 01:24:58.340937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389396869.mount: Deactivated successfully.
Mar 7 01:24:58.359725 containerd[1792]: time="2026-03-07T01:24:58.359682386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:58.363021 containerd[1792]: time="2026-03-07T01:24:58.362993100Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 7 01:24:58.367278 containerd[1792]: time="2026-03-07T01:24:58.367252892Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:58.371812 containerd[1792]: time="2026-03-07T01:24:58.371778363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:24:58.373234 containerd[1792]: time="2026-03-07T01:24:58.373205280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 624.608255ms"
Mar 7 01:24:58.373286 containerd[1792]: time="2026-03-07T01:24:58.373240160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 7 01:24:58.373757 containerd[1792]: time="2026-03-07T01:24:58.373732799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 01:24:59.011603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812989686.mount: Deactivated successfully.
Mar 7 01:24:59.842692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 01:24:59.849433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:25:00.924462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:25:00.934705 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:25:00.965990 kubelet[2659]: E0307 01:25:00.965926 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:25:00.969252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:25:00.969431 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:25:01.683592 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 7 01:25:09.296420 containerd[1792]: time="2026-03-07T01:25:09.296367322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:25:09.298895 containerd[1792]: time="2026-03-07T01:25:09.298675039Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 7 01:25:09.345324 containerd[1792]: time="2026-03-07T01:25:09.344192931Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:25:09.393157 containerd[1792]: time="2026-03-07T01:25:09.393121578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:25:09.394269 containerd[1792]: time="2026-03-07T01:25:09.394239737Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 11.020476218s"
Mar 7 01:25:09.394328 containerd[1792]: time="2026-03-07T01:25:09.394271096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 7 01:25:11.092686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 7 01:25:11.098701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:25:11.261531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:25:11.262116 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:25:11.315335 kubelet[2749]: E0307 01:25:11.314703 2749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:25:11.317547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:25:11.317682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:25:13.336892 update_engine[1775]: I20260307 01:25:13.336328 1775 update_attempter.cc:509] Updating boot flags...
Mar 7 01:25:14.898306 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2768)
Mar 7 01:25:15.004379 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2767)
Mar 7 01:25:17.158732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:25:17.168499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:25:17.194185 systemd[1]: Reloading requested from client PID 2831 ('systemctl') (unit session-9.scope)...
Mar 7 01:25:17.194202 systemd[1]: Reloading...
Mar 7 01:25:17.300325 zram_generator::config[2875]: No configuration found.
Mar 7 01:25:17.406521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:25:17.490876 systemd[1]: Reloading finished in 296 ms.
Mar 7 01:25:17.538003 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:25:17.540607 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:25:17.540838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:25:17.547069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:25:22.343458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:25:22.350624 (kubelet)[2953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:25:22.389454 kubelet[2953]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:25:22.389454 kubelet[2953]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:25:22.389454 kubelet[2953]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:25:22.389823 kubelet[2953]: I0307 01:25:22.389489 2953 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:25:23.991158 kubelet[2953]: I0307 01:25:23.991126 2953 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:25:23.993346 kubelet[2953]: I0307 01:25:23.991554 2953 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:25:23.993346 kubelet[2953]: I0307 01:25:23.991768 2953 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:25:24.019452 kubelet[2953]: E0307 01:25:24.019408 2953 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:25:24.019987 kubelet[2953]: I0307 01:25:24.019889 2953 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:25:24.029964 kubelet[2953]: E0307 01:25:24.029920 2953 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:25:24.029964 kubelet[2953]: I0307 01:25:24.029966 2953 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:25:24.033248 kubelet[2953]: I0307 01:25:24.033220 2953 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:25:24.034901 kubelet[2953]: I0307 01:25:24.034870 2953 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:25:24.035054 kubelet[2953]: I0307 01:25:24.034901 2953 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-0072e04abc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 01:25:24.035135 kubelet[2953]: I0307 01:25:24.035055 2953 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:25:24.035135 kubelet[2953]: I0307 01:25:24.035063 2953 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:25:24.035209 kubelet[2953]: I0307 01:25:24.035195 2953 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:25:24.038007 kubelet[2953]: I0307 01:25:24.037989 2953 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:25:24.038049 kubelet[2953]: I0307 01:25:24.038012 2953 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:25:24.038049 kubelet[2953]: I0307 01:25:24.038039 2953 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:25:24.038090 kubelet[2953]: I0307 01:25:24.038053 2953 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:25:24.042154 kubelet[2953]: E0307 01:25:24.042128 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:25:24.042486 kubelet[2953]: E0307 01:25:24.042387 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-0072e04abc&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:25:24.042718 kubelet[2953]: I0307 01:25:24.042706 2953 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:25:24.043282 kubelet[2953]: I0307 01:25:24.043259 2953 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:25:24.043354 kubelet[2953]: W0307 01:25:24.043332 2953 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:25:24.046322 kubelet[2953]: I0307 01:25:24.046153 2953 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:25:24.046322 kubelet[2953]: I0307 01:25:24.046190 2953 server.go:1289] "Started kubelet"
Mar 7 01:25:24.046611 kubelet[2953]: I0307 01:25:24.046572 2953 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:25:24.047340 kubelet[2953]: I0307 01:25:24.047318 2953 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:25:24.049180 kubelet[2953]: I0307 01:25:24.048713 2953 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:25:24.049180 kubelet[2953]: I0307 01:25:24.048999 2953 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:25:24.050029 kubelet[2953]: E0307 01:25:24.049091 2953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-0072e04abc.189a6ab3dbfc9e25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-0072e04abc,UID:ci-4081.3.6-n-0072e04abc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-0072e04abc,},FirstTimestamp:2026-03-07 01:25:24.046167589 +0000 UTC m=+1.690377821,LastTimestamp:2026-03-07 01:25:24.046167589 +0000 UTC m=+1.690377821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-0072e04abc,}"
Mar 7 01:25:24.052331 kubelet[2953]: I0307 01:25:24.051716 2953 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:25:24.053006 kubelet[2953]: I0307 01:25:24.051772 2953 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:25:24.053328 kubelet[2953]: I0307 01:25:24.053295 2953 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:25:24.053775 kubelet[2953]: I0307 01:25:24.053759 2953 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:25:24.053876 kubelet[2953]: I0307 01:25:24.053867 2953 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:25:24.054423 kubelet[2953]: E0307 01:25:24.054405 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:25:24.055052 kubelet[2953]: I0307 01:25:24.054837 2953 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:25:24.055052 kubelet[2953]: I0307 01:25:24.054914 2953 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:25:24.055877 kubelet[2953]: E0307 01:25:24.055858 2953 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-0072e04abc\" not found"
Mar 7 01:25:24.056039 kubelet[2953]: E0307 01:25:24.056018 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0072e04abc?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="200ms"
Mar 7 01:25:24.056624 kubelet[2953]: I0307 01:25:24.056609 2953 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:25:24.083325 kubelet[2953]: I0307 01:25:24.083030 2953 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:25:24.083974 kubelet[2953]: I0307 01:25:24.083958 2953 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:25:24.084060 kubelet[2953]: I0307 01:25:24.084051 2953 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:25:24.084126 kubelet[2953]: I0307 01:25:24.084116 2953 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:25:24.084392 kubelet[2953]: I0307 01:25:24.084165 2953 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:25:24.084392 kubelet[2953]: E0307 01:25:24.084207 2953 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:25:24.090598 kubelet[2953]: E0307 01:25:24.090571 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:25:24.091375 kubelet[2953]: I0307 01:25:24.091357 2953 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:25:24.091375 kubelet[2953]: I0307 01:25:24.091372 2953 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:25:24.091458 kubelet[2953]: I0307 01:25:24.091398 2953 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:25:24.097514 kubelet[2953]: I0307 01:25:24.097488 2953 policy_none.go:49] "None policy: Start"
Mar 7 01:25:24.097514 kubelet[2953]: I0307 01:25:24.097517 2953
memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:25:24.097643 kubelet[2953]: I0307 01:25:24.097528 2953 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:25:24.104081 kubelet[2953]: E0307 01:25:24.104056 2953 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:25:24.104250 kubelet[2953]: I0307 01:25:24.104235 2953 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:25:24.104286 kubelet[2953]: I0307 01:25:24.104251 2953 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:25:24.105557 kubelet[2953]: I0307 01:25:24.105542 2953 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:25:24.108318 kubelet[2953]: E0307 01:25:24.108290 2953 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:25:24.108392 kubelet[2953]: E0307 01:25:24.108350 2953 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-0072e04abc\" not found" Mar 7 01:25:24.194850 kubelet[2953]: E0307 01:25:24.194387 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.200684 kubelet[2953]: E0307 01:25:24.200646 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.205611 kubelet[2953]: I0307 01:25:24.205596 2953 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.205990 kubelet[2953]: E0307 01:25:24.205970 2953 kubelet_node_status.go:107] "Unable to register node with API 
server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.209520 kubelet[2953]: E0307 01:25:24.209494 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255276 kubelet[2953]: I0307 01:25:24.255186 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255276 kubelet[2953]: I0307 01:25:24.255219 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255276 kubelet[2953]: I0307 01:25:24.255239 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f835707f326376fef3605fba275bb760-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" (UID: \"f835707f326376fef3605fba275bb760\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255276 kubelet[2953]: I0307 01:25:24.255254 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f835707f326376fef3605fba275bb760-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" 
(UID: \"f835707f326376fef3605fba275bb760\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255276 kubelet[2953]: I0307 01:25:24.255274 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f835707f326376fef3605fba275bb760-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" (UID: \"f835707f326376fef3605fba275bb760\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255523 kubelet[2953]: I0307 01:25:24.255292 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255523 kubelet[2953]: I0307 01:25:24.255326 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255523 kubelet[2953]: I0307 01:25:24.255342 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/059650575c575c71873ea35a2ee7cd0b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-0072e04abc\" (UID: \"059650575c575c71873ea35a2ee7cd0b\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.255523 kubelet[2953]: I0307 01:25:24.255364 2953 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.256638 kubelet[2953]: E0307 01:25:24.256584 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0072e04abc?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="400ms" Mar 7 01:25:24.407919 kubelet[2953]: I0307 01:25:24.407891 2953 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.408215 kubelet[2953]: E0307 01:25:24.408195 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.495711 containerd[1792]: time="2026-03-07T01:25:24.495449669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-0072e04abc,Uid:f835707f326376fef3605fba275bb760,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:24.501697 containerd[1792]: time="2026-03-07T01:25:24.501663980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-0072e04abc,Uid:03fd1b9f8c22804df51653cc7db909fb,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:24.510625 containerd[1792]: time="2026-03-07T01:25:24.510402248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-0072e04abc,Uid:059650575c575c71873ea35a2ee7cd0b,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:24.657127 kubelet[2953]: E0307 01:25:24.657090 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0072e04abc?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="800ms" Mar 7 01:25:24.810512 kubelet[2953]: I0307 01:25:24.810429 2953 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:24.810762 kubelet[2953]: E0307 01:25:24.810729 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:25.119977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909431602.mount: Deactivated successfully. Mar 7 01:25:25.142775 containerd[1792]: time="2026-03-07T01:25:25.141912428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:25:25.144500 containerd[1792]: time="2026-03-07T01:25:25.144456784Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:25:25.147154 containerd[1792]: time="2026-03-07T01:25:25.147122221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 7 01:25:25.149237 containerd[1792]: time="2026-03-07T01:25:25.149197538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:25:25.153095 containerd[1792]: time="2026-03-07T01:25:25.152097334Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:25:25.154204 containerd[1792]: time="2026-03-07T01:25:25.154177171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:25:25.156807 containerd[1792]: time="2026-03-07T01:25:25.156755167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:25:25.162069 containerd[1792]: time="2026-03-07T01:25:25.160992721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:25:25.162069 containerd[1792]: time="2026-03-07T01:25:25.161832560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 666.309971ms" Mar 7 01:25:25.166324 containerd[1792]: time="2026-03-07T01:25:25.165868714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 655.416987ms" Mar 7 01:25:25.166657 containerd[1792]: time="2026-03-07T01:25:25.166634313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 664.911093ms" Mar 7 01:25:25.415452 
kubelet[2953]: E0307 01:25:25.415342 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:25:25.458078 kubelet[2953]: E0307 01:25:25.458032 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0072e04abc?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="1.6s" Mar 7 01:25:25.472741 kubelet[2953]: E0307 01:25:25.472717 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-0072e04abc&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:25:25.514341 kubelet[2953]: E0307 01:25:25.514211 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:25:25.613225 kubelet[2953]: I0307 01:25:25.612858 2953 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:25.613225 kubelet[2953]: E0307 01:25:25.613147 2953 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:25.669408 kubelet[2953]: E0307 
01:25:25.669290 2953 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:25:25.905672 containerd[1792]: time="2026-03-07T01:25:25.905333981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:25.905672 containerd[1792]: time="2026-03-07T01:25:25.905386741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:25.905672 containerd[1792]: time="2026-03-07T01:25:25.905402181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:25.907098 containerd[1792]: time="2026-03-07T01:25:25.906696139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:25.908588 containerd[1792]: time="2026-03-07T01:25:25.907239658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:25.908588 containerd[1792]: time="2026-03-07T01:25:25.907287698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:25.908588 containerd[1792]: time="2026-03-07T01:25:25.907338898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:25.908588 containerd[1792]: time="2026-03-07T01:25:25.907428698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:25.909888 containerd[1792]: time="2026-03-07T01:25:25.909810894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:25.909966 containerd[1792]: time="2026-03-07T01:25:25.909911654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:25.909991 containerd[1792]: time="2026-03-07T01:25:25.909954934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:25.910325 containerd[1792]: time="2026-03-07T01:25:25.910250694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:25.974713 containerd[1792]: time="2026-03-07T01:25:25.974608282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-0072e04abc,Uid:03fd1b9f8c22804df51653cc7db909fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0baf02b894c31a4316366cc488ef8fa05584027961c576a5f0b7acd058277d94\"" Mar 7 01:25:25.975860 containerd[1792]: time="2026-03-07T01:25:25.975829960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-0072e04abc,Uid:059650575c575c71873ea35a2ee7cd0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffdc5a9b49f6130b029c5c0e7eed758fb0f12f411194d306a3953b3af8fc8ddc\"" Mar 7 01:25:25.979143 containerd[1792]: time="2026-03-07T01:25:25.979113316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-0072e04abc,Uid:f835707f326376fef3605fba275bb760,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5a8e6c962e8aca60f071fb182b58d60c8676a7efedd150e3cab8a3a4fdb645c\"" Mar 7 01:25:25.985706 containerd[1792]: time="2026-03-07T01:25:25.985672066Z" level=info 
msg="CreateContainer within sandbox \"ffdc5a9b49f6130b029c5c0e7eed758fb0f12f411194d306a3953b3af8fc8ddc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:25:25.990200 containerd[1792]: time="2026-03-07T01:25:25.990095380Z" level=info msg="CreateContainer within sandbox \"0baf02b894c31a4316366cc488ef8fa05584027961c576a5f0b7acd058277d94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:25:25.994175 containerd[1792]: time="2026-03-07T01:25:25.994142374Z" level=info msg="CreateContainer within sandbox \"d5a8e6c962e8aca60f071fb182b58d60c8676a7efedd150e3cab8a3a4fdb645c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:25:26.038850 containerd[1792]: time="2026-03-07T01:25:26.038805511Z" level=info msg="CreateContainer within sandbox \"ffdc5a9b49f6130b029c5c0e7eed758fb0f12f411194d306a3953b3af8fc8ddc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02983d0aa874f0e7ef7fbedcf2d6507908a6ab028f1635838f1efed956fb0dbd\"" Mar 7 01:25:26.039427 containerd[1792]: time="2026-03-07T01:25:26.039405190Z" level=info msg="StartContainer for \"02983d0aa874f0e7ef7fbedcf2d6507908a6ab028f1635838f1efed956fb0dbd\"" Mar 7 01:25:26.057435 containerd[1792]: time="2026-03-07T01:25:26.057393684Z" level=info msg="CreateContainer within sandbox \"0baf02b894c31a4316366cc488ef8fa05584027961c576a5f0b7acd058277d94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f1abe33de38d8a4e6ed8e1d515c7d8fcb6f80cdb50013930be500f38ef2c7c41\"" Mar 7 01:25:26.059131 containerd[1792]: time="2026-03-07T01:25:26.058011923Z" level=info msg="StartContainer for \"f1abe33de38d8a4e6ed8e1d515c7d8fcb6f80cdb50013930be500f38ef2c7c41\"" Mar 7 01:25:26.064224 containerd[1792]: time="2026-03-07T01:25:26.064190634Z" level=info msg="CreateContainer within sandbox \"d5a8e6c962e8aca60f071fb182b58d60c8676a7efedd150e3cab8a3a4fdb645c\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e028d5aab9bd584565f458e628452770b65c4e868d68d247279c37437c6b394\"" Mar 7 01:25:26.065118 containerd[1792]: time="2026-03-07T01:25:26.065089193Z" level=info msg="StartContainer for \"9e028d5aab9bd584565f458e628452770b65c4e868d68d247279c37437c6b394\"" Mar 7 01:25:26.124138 kubelet[2953]: E0307 01:25:26.124099 2953 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:25:26.144035 containerd[1792]: time="2026-03-07T01:25:26.143684481Z" level=info msg="StartContainer for \"02983d0aa874f0e7ef7fbedcf2d6507908a6ab028f1635838f1efed956fb0dbd\" returns successfully" Mar 7 01:25:26.166421 containerd[1792]: time="2026-03-07T01:25:26.166372649Z" level=info msg="StartContainer for \"f1abe33de38d8a4e6ed8e1d515c7d8fcb6f80cdb50013930be500f38ef2c7c41\" returns successfully" Mar 7 01:25:26.216000 containerd[1792]: time="2026-03-07T01:25:26.215731699Z" level=info msg="StartContainer for \"9e028d5aab9bd584565f458e628452770b65c4e868d68d247279c37437c6b394\" returns successfully" Mar 7 01:25:27.124211 kubelet[2953]: E0307 01:25:27.123446 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:27.129225 kubelet[2953]: E0307 01:25:27.128850 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:27.132584 kubelet[2953]: E0307 01:25:27.132566 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:27.217152 kubelet[2953]: I0307 01:25:27.216458 2953 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:28.138461 kubelet[2953]: E0307 01:25:28.138096 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:28.139158 kubelet[2953]: E0307 01:25:28.139141 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:28.139605 kubelet[2953]: E0307 01:25:28.139494 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.139062 kubelet[2953]: E0307 01:25:29.139034 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.139409 kubelet[2953]: E0307 01:25:29.139379 2953 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.205431 kubelet[2953]: E0307 01:25:29.205401 2953 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-0072e04abc\" not found" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.223424 kubelet[2953]: I0307 01:25:29.223398 2953 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.223525 kubelet[2953]: E0307 01:25:29.223430 2953 kubelet_node_status.go:548] "Error updating node status, will retry" err="error 
getting node \"ci-4081.3.6-n-0072e04abc\": node \"ci-4081.3.6-n-0072e04abc\" not found" Mar 7 01:25:29.261115 kubelet[2953]: I0307 01:25:29.261083 2953 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.306316 kubelet[2953]: E0307 01:25:29.305368 2953 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.306316 kubelet[2953]: I0307 01:25:29.305401 2953 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.309468 kubelet[2953]: E0307 01:25:29.309434 2953 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.309468 kubelet[2953]: I0307 01:25:29.309460 2953 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:29.317164 kubelet[2953]: E0307 01:25:29.317127 2953 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-0072e04abc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:30.043725 kubelet[2953]: I0307 01:25:30.043497 2953 apiserver.go:52] "Watching apiserver" Mar 7 01:25:30.054629 kubelet[2953]: I0307 01:25:30.054592 2953 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:25:31.468491 systemd[1]: Reloading requested from client PID 3235 ('systemctl') (unit session-9.scope)... Mar 7 01:25:31.468881 systemd[1]: Reloading... 
Mar 7 01:25:31.543318 zram_generator::config[3272]: No configuration found.
Mar 7 01:25:31.656493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:25:31.747926 systemd[1]: Reloading finished in 278 ms.
Mar 7 01:25:31.776080 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:25:31.789268 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:25:31.789637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:25:31.797071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:25:31.935467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:25:31.940908 (kubelet)[3349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:25:32.001444 kubelet[3349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:25:32.001444 kubelet[3349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:25:32.001444 kubelet[3349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:25:32.001444 kubelet[3349]: I0307 01:25:32.001401 3349 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:25:32.007873 kubelet[3349]: I0307 01:25:32.006513 3349 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:25:32.007873 kubelet[3349]: I0307 01:25:32.006537 3349 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:25:32.007873 kubelet[3349]: I0307 01:25:32.006714 3349 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:25:32.008023 kubelet[3349]: I0307 01:25:32.007977 3349 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 01:25:32.010646 kubelet[3349]: I0307 01:25:32.010378 3349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:25:32.019737 kubelet[3349]: E0307 01:25:32.019701 3349 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:25:32.019737 kubelet[3349]: I0307 01:25:32.019737 3349 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:25:32.022276 kubelet[3349]: I0307 01:25:32.022260 3349 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:25:32.022666 kubelet[3349]: I0307 01:25:32.022639 3349 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:25:32.022795 kubelet[3349]: I0307 01:25:32.022666 3349 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-0072e04abc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 01:25:32.022883 kubelet[3349]: I0307 01:25:32.022799 3349 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:25:32.022883 kubelet[3349]: I0307 01:25:32.022808 3349 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:25:32.022883 kubelet[3349]: I0307 01:25:32.022849 3349 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:25:32.022974 kubelet[3349]: I0307 01:25:32.022960 3349 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:25:32.022974 kubelet[3349]: I0307 01:25:32.022974 3349 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:25:32.023015 kubelet[3349]: I0307 01:25:32.022996 3349 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:25:32.023015 kubelet[3349]: I0307 01:25:32.023009 3349 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:25:32.028011 kubelet[3349]: I0307 01:25:32.025091 3349 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:25:32.028011 kubelet[3349]: I0307 01:25:32.026969 3349 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:25:32.037128 kubelet[3349]: I0307 01:25:32.037058 3349 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:25:32.037128 kubelet[3349]: I0307 01:25:32.037099 3349 server.go:1289] "Started kubelet"
Mar 7 01:25:32.041222 kubelet[3349]: I0307 01:25:32.039062 3349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:25:32.052465 kubelet[3349]: I0307 01:25:32.052332 3349 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:25:32.053807 kubelet[3349]: I0307 01:25:32.053086 3349 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:25:32.061883 kubelet[3349]: I0307 01:25:32.061827 3349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:25:32.062046 kubelet[3349]: I0307 01:25:32.062025 3349 server.go:255]
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:25:32.062217 kubelet[3349]: I0307 01:25:32.062198 3349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:25:32.067370 kubelet[3349]: I0307 01:25:32.066970 3349 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:25:32.070324 kubelet[3349]: I0307 01:25:32.069648 3349 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:25:32.070324 kubelet[3349]: I0307 01:25:32.069786 3349 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:25:32.070324 kubelet[3349]: I0307 01:25:32.070251 3349 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:25:32.071866 kubelet[3349]: I0307 01:25:32.071809 3349 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:25:32.072143 kubelet[3349]: E0307 01:25:32.072116 3349 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:25:32.073338 kubelet[3349]: I0307 01:25:32.072767 3349 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:25:32.073338 kubelet[3349]: I0307 01:25:32.072784 3349 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:25:32.089775 kubelet[3349]: I0307 01:25:32.089750 3349 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:25:32.089907 kubelet[3349]: I0307 01:25:32.089897 3349 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:25:32.089966 kubelet[3349]: I0307 01:25:32.089957 3349 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:25:32.090011 kubelet[3349]: I0307 01:25:32.090005 3349 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:25:32.090095 kubelet[3349]: E0307 01:25:32.090079 3349 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:25:32.125101 kubelet[3349]: I0307 01:25:32.125041 3349 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:25:32.125253 kubelet[3349]: I0307 01:25:32.125241 3349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:25:32.125319 kubelet[3349]: I0307 01:25:32.125311 3349 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:25:32.125482 kubelet[3349]: I0307 01:25:32.125470 3349 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:25:32.125550 kubelet[3349]: I0307 01:25:32.125531 3349 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:25:32.125599 kubelet[3349]: I0307 01:25:32.125592 3349 policy_none.go:49] "None policy: Start" Mar 7 01:25:32.125647 kubelet[3349]: I0307 01:25:32.125640 3349 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:25:32.125707 kubelet[3349]: I0307 01:25:32.125699 3349 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:25:32.125843 kubelet[3349]: I0307 01:25:32.125834 3349 state_mem.go:75] "Updated machine memory state" Mar 7 01:25:32.126898 kubelet[3349]: E0307 01:25:32.126882 3349 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:25:32.127109 kubelet[3349]: I0307 01:25:32.127097 
3349 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:25:32.127191 kubelet[3349]: I0307 01:25:32.127167 3349 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:25:32.128026 kubelet[3349]: I0307 01:25:32.127862 3349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:25:32.129360 kubelet[3349]: E0307 01:25:32.129332 3349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:25:32.191076 kubelet[3349]: I0307 01:25:32.191025 3349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.191562 kubelet[3349]: I0307 01:25:32.191538 3349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.191779 kubelet[3349]: I0307 01:25:32.191681 3349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.196884 kubelet[3349]: I0307 01:25:32.196855 3349 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:25:32.204862 kubelet[3349]: I0307 01:25:32.204315 3349 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:25:32.204862 kubelet[3349]: I0307 01:25:32.204447 3349 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:25:32.234658 kubelet[3349]: I0307 01:25:32.234637 3349 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0072e04abc" Mar 7 
01:25:32.244873 kubelet[3349]: I0307 01:25:32.244842 3349 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.245037 kubelet[3349]: I0307 01:25:32.244929 3349 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271252 kubelet[3349]: I0307 01:25:32.270571 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271252 kubelet[3349]: I0307 01:25:32.270608 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271252 kubelet[3349]: I0307 01:25:32.270628 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271252 kubelet[3349]: I0307 01:25:32.270644 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/059650575c575c71873ea35a2ee7cd0b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-0072e04abc\" (UID: \"059650575c575c71873ea35a2ee7cd0b\") " 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271252 kubelet[3349]: I0307 01:25:32.270659 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f835707f326376fef3605fba275bb760-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" (UID: \"f835707f326376fef3605fba275bb760\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271455 kubelet[3349]: I0307 01:25:32.270672 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f835707f326376fef3605fba275bb760-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" (UID: \"f835707f326376fef3605fba275bb760\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271455 kubelet[3349]: I0307 01:25:32.270687 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271455 kubelet[3349]: I0307 01:25:32.270702 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f835707f326376fef3605fba275bb760-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" (UID: \"f835707f326376fef3605fba275bb760\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:32.271455 kubelet[3349]: I0307 01:25:32.270715 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/03fd1b9f8c22804df51653cc7db909fb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" (UID: \"03fd1b9f8c22804df51653cc7db909fb\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:33.023681 kubelet[3349]: I0307 01:25:33.023593 3349 apiserver.go:52] "Watching apiserver" Mar 7 01:25:33.070340 kubelet[3349]: I0307 01:25:33.070284 3349 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:25:33.107623 kubelet[3349]: I0307 01:25:33.107591 3349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:33.108186 kubelet[3349]: I0307 01:25:33.108150 3349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:33.108396 kubelet[3349]: I0307 01:25:33.108379 3349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:33.125394 kubelet[3349]: I0307 01:25:33.125360 3349 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:25:33.125506 kubelet[3349]: I0307 01:25:33.125406 3349 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:25:33.125506 kubelet[3349]: E0307 01:25:33.125440 3349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-0072e04abc\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:33.125648 kubelet[3349]: I0307 01:25:33.125634 3349 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 
01:25:33.125712 kubelet[3349]: E0307 01:25:33.125667 3349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-0072e04abc\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:33.125776 kubelet[3349]: E0307 01:25:33.125694 3349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-0072e04abc\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" Mar 7 01:25:33.147608 kubelet[3349]: I0307 01:25:33.147535 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0072e04abc" podStartSLOduration=1.147514693 podStartE2EDuration="1.147514693s" podCreationTimestamp="2026-03-07 01:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:25:33.137134748 +0000 UTC m=+1.191891118" watchObservedRunningTime="2026-03-07 01:25:33.147514693 +0000 UTC m=+1.202271063" Mar 7 01:25:33.159053 kubelet[3349]: I0307 01:25:33.158763 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0072e04abc" podStartSLOduration=1.158749596 podStartE2EDuration="1.158749596s" podCreationTimestamp="2026-03-07 01:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:25:33.149664729 +0000 UTC m=+1.204421099" watchObservedRunningTime="2026-03-07 01:25:33.158749596 +0000 UTC m=+1.213505966" Mar 7 01:25:33.159053 kubelet[3349]: I0307 01:25:33.158882 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0072e04abc" podStartSLOduration=1.158877076 podStartE2EDuration="1.158877076s" podCreationTimestamp="2026-03-07 01:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:25:33.158144437 +0000 UTC m=+1.212900807" watchObservedRunningTime="2026-03-07 01:25:33.158877076 +0000 UTC m=+1.213633406" Mar 7 01:25:37.594981 kubelet[3349]: I0307 01:25:37.594953 3349 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:25:37.596373 kubelet[3349]: I0307 01:25:37.595656 3349 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:25:37.596420 containerd[1792]: time="2026-03-07T01:25:37.595430474Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:25:38.502326 kubelet[3349]: I0307 01:25:38.502279 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18fbbe19-e6c0-4a02-b5ee-c05e8438f278-lib-modules\") pod \"kube-proxy-nlw6z\" (UID: \"18fbbe19-e6c0-4a02-b5ee-c05e8438f278\") " pod="kube-system/kube-proxy-nlw6z" Mar 7 01:25:38.502326 kubelet[3349]: I0307 01:25:38.502325 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flkf6\" (UniqueName: \"kubernetes.io/projected/18fbbe19-e6c0-4a02-b5ee-c05e8438f278-kube-api-access-flkf6\") pod \"kube-proxy-nlw6z\" (UID: \"18fbbe19-e6c0-4a02-b5ee-c05e8438f278\") " pod="kube-system/kube-proxy-nlw6z" Mar 7 01:25:38.502519 kubelet[3349]: I0307 01:25:38.502347 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18fbbe19-e6c0-4a02-b5ee-c05e8438f278-kube-proxy\") pod \"kube-proxy-nlw6z\" (UID: \"18fbbe19-e6c0-4a02-b5ee-c05e8438f278\") " pod="kube-system/kube-proxy-nlw6z" Mar 7 01:25:38.502519 kubelet[3349]: I0307 01:25:38.502362 3349 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18fbbe19-e6c0-4a02-b5ee-c05e8438f278-xtables-lock\") pod \"kube-proxy-nlw6z\" (UID: \"18fbbe19-e6c0-4a02-b5ee-c05e8438f278\") " pod="kube-system/kube-proxy-nlw6z" Mar 7 01:25:38.773963 containerd[1792]: time="2026-03-07T01:25:38.772785333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nlw6z,Uid:18fbbe19-e6c0-4a02-b5ee-c05e8438f278,Namespace:kube-system,Attempt:0,}" Mar 7 01:25:38.804619 kubelet[3349]: I0307 01:25:38.804577 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19ff4cd1-0d98-47d1-be14-bb1e3e987521-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-bcsj2\" (UID: \"19ff4cd1-0d98-47d1-be14-bb1e3e987521\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bcsj2" Mar 7 01:25:38.804619 kubelet[3349]: I0307 01:25:38.804620 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbn8\" (UniqueName: \"kubernetes.io/projected/19ff4cd1-0d98-47d1-be14-bb1e3e987521-kube-api-access-dkbn8\") pod \"tigera-operator-6bf85f8dd-bcsj2\" (UID: \"19ff4cd1-0d98-47d1-be14-bb1e3e987521\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bcsj2" Mar 7 01:25:38.846475 containerd[1792]: time="2026-03-07T01:25:38.844462227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:38.846475 containerd[1792]: time="2026-03-07T01:25:38.844510107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:38.846475 containerd[1792]: time="2026-03-07T01:25:38.844520787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:38.846475 containerd[1792]: time="2026-03-07T01:25:38.844596147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:38.887136 containerd[1792]: time="2026-03-07T01:25:38.886992884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nlw6z,Uid:18fbbe19-e6c0-4a02-b5ee-c05e8438f278,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f572d40130288ebe3e646c3ded80f7e62ebbd6af3fa8ed8a95d5d92544e9b72\"" Mar 7 01:25:38.895237 containerd[1792]: time="2026-03-07T01:25:38.895129512Z" level=info msg="CreateContainer within sandbox \"4f572d40130288ebe3e646c3ded80f7e62ebbd6af3fa8ed8a95d5d92544e9b72\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:25:38.934731 containerd[1792]: time="2026-03-07T01:25:38.934439694Z" level=info msg="CreateContainer within sandbox \"4f572d40130288ebe3e646c3ded80f7e62ebbd6af3fa8ed8a95d5d92544e9b72\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfbdc250b04ec6eba0ab2edc0b10304963d593fa8a7f64a2dfb03daba9e41090\"" Mar 7 01:25:38.936324 containerd[1792]: time="2026-03-07T01:25:38.935515492Z" level=info msg="StartContainer for \"bfbdc250b04ec6eba0ab2edc0b10304963d593fa8a7f64a2dfb03daba9e41090\"" Mar 7 01:25:38.996515 containerd[1792]: time="2026-03-07T01:25:38.996470482Z" level=info msg="StartContainer for \"bfbdc250b04ec6eba0ab2edc0b10304963d593fa8a7f64a2dfb03daba9e41090\" returns successfully" Mar 7 01:25:39.091031 containerd[1792]: time="2026-03-07T01:25:39.090917942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bcsj2,Uid:19ff4cd1-0d98-47d1-be14-bb1e3e987521,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:25:39.131142 kubelet[3349]: I0307 01:25:39.130868 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nlw6z" 
podStartSLOduration=1.130851123 podStartE2EDuration="1.130851123s" podCreationTimestamp="2026-03-07 01:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:25:39.130558564 +0000 UTC m=+7.185314934" watchObservedRunningTime="2026-03-07 01:25:39.130851123 +0000 UTC m=+7.185607493" Mar 7 01:25:39.135882 containerd[1792]: time="2026-03-07T01:25:39.135688196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:39.135882 containerd[1792]: time="2026-03-07T01:25:39.135732196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:39.135882 containerd[1792]: time="2026-03-07T01:25:39.135741956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:39.135882 containerd[1792]: time="2026-03-07T01:25:39.135815956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:39.173135 containerd[1792]: time="2026-03-07T01:25:39.173102461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bcsj2,Uid:19ff4cd1-0d98-47d1-be14-bb1e3e987521,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"185faabab541066712cc0231479ec6211c91ed2422f76dbca9b4e904627bd177\"" Mar 7 01:25:39.174529 containerd[1792]: time="2026-03-07T01:25:39.174510059Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:25:41.138419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144841485.mount: Deactivated successfully. 
Mar 7 01:25:41.664910 containerd[1792]: time="2026-03-07T01:25:41.664816214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:41.667218 containerd[1792]: time="2026-03-07T01:25:41.667184771Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 01:25:41.670836 containerd[1792]: time="2026-03-07T01:25:41.670808886Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:41.675077 containerd[1792]: time="2026-03-07T01:25:41.675027199Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:41.675953 containerd[1792]: time="2026-03-07T01:25:41.675838038Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.50098446s" Mar 7 01:25:41.675953 containerd[1792]: time="2026-03-07T01:25:41.675869918Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 01:25:41.683282 containerd[1792]: time="2026-03-07T01:25:41.683254227Z" level=info msg="CreateContainer within sandbox \"185faabab541066712cc0231479ec6211c91ed2422f76dbca9b4e904627bd177\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:25:41.710429 containerd[1792]: time="2026-03-07T01:25:41.710283428Z" level=info msg="CreateContainer within sandbox 
\"185faabab541066712cc0231479ec6211c91ed2422f76dbca9b4e904627bd177\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3d76a952c6154b767c9922eb0604b29d69dc1befbefa3738dcae3f89a33825b0\"" Mar 7 01:25:41.711773 containerd[1792]: time="2026-03-07T01:25:41.711016987Z" level=info msg="StartContainer for \"3d76a952c6154b767c9922eb0604b29d69dc1befbefa3738dcae3f89a33825b0\"" Mar 7 01:25:41.762639 containerd[1792]: time="2026-03-07T01:25:41.762598551Z" level=info msg="StartContainer for \"3d76a952c6154b767c9922eb0604b29d69dc1befbefa3738dcae3f89a33825b0\" returns successfully" Mar 7 01:25:47.468025 sudo[2332]: pam_unix(sudo:session): session closed for user root Mar 7 01:25:47.545586 sshd[2328]: pam_unix(sshd:session): session closed for user core Mar 7 01:25:47.551832 systemd-logind[1773]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:25:47.552931 systemd[1]: sshd@6-10.200.20.23:22-10.200.16.10:33992.service: Deactivated successfully. Mar 7 01:25:47.555920 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:25:47.557511 systemd-logind[1773]: Removed session 9. 
Mar 7 01:25:53.620744 kubelet[3349]: I0307 01:25:53.620683 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-bcsj2" podStartSLOduration=13.11800449 podStartE2EDuration="15.620666988s" podCreationTimestamp="2026-03-07 01:25:38 +0000 UTC" firstStartedPulling="2026-03-07 01:25:39.174138699 +0000 UTC m=+7.228895069" lastFinishedPulling="2026-03-07 01:25:41.676801197 +0000 UTC m=+9.731557567" observedRunningTime="2026-03-07 01:25:42.176067347 +0000 UTC m=+10.230823717" watchObservedRunningTime="2026-03-07 01:25:53.620666988 +0000 UTC m=+21.675423358" Mar 7 01:25:53.693342 kubelet[3349]: I0307 01:25:53.693295 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/202e52c6-f939-4b6f-8c81-3ce5c3fae76a-typha-certs\") pod \"calico-typha-75cb9c6786-zwlfk\" (UID: \"202e52c6-f939-4b6f-8c81-3ce5c3fae76a\") " pod="calico-system/calico-typha-75cb9c6786-zwlfk" Mar 7 01:25:53.693472 kubelet[3349]: I0307 01:25:53.693349 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br29t\" (UniqueName: \"kubernetes.io/projected/202e52c6-f939-4b6f-8c81-3ce5c3fae76a-kube-api-access-br29t\") pod \"calico-typha-75cb9c6786-zwlfk\" (UID: \"202e52c6-f939-4b6f-8c81-3ce5c3fae76a\") " pod="calico-system/calico-typha-75cb9c6786-zwlfk" Mar 7 01:25:53.693472 kubelet[3349]: I0307 01:25:53.693370 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/202e52c6-f939-4b6f-8c81-3ce5c3fae76a-tigera-ca-bundle\") pod \"calico-typha-75cb9c6786-zwlfk\" (UID: \"202e52c6-f939-4b6f-8c81-3ce5c3fae76a\") " pod="calico-system/calico-typha-75cb9c6786-zwlfk" Mar 7 01:25:53.795179 kubelet[3349]: I0307 01:25:53.794354 3349 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-bpffs\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.795179 kubelet[3349]: I0307 01:25:53.794415 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-cni-bin-dir\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.795179 kubelet[3349]: I0307 01:25:53.794431 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-cni-log-dir\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.795179 kubelet[3349]: I0307 01:25:53.794446 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e957ebc4-a5c0-4c29-9784-51d1988e0008-node-certs\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.795179 kubelet[3349]: I0307 01:25:53.794482 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-policysync\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797399 kubelet[3349]: I0307 01:25:53.794501 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kfft\" (UniqueName: 
\"kubernetes.io/projected/e957ebc4-a5c0-4c29-9784-51d1988e0008-kube-api-access-9kfft\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797399 kubelet[3349]: I0307 01:25:53.794516 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-var-run-calico\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797399 kubelet[3349]: I0307 01:25:53.794531 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-cni-net-dir\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797399 kubelet[3349]: I0307 01:25:53.794570 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-nodeproc\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797399 kubelet[3349]: I0307 01:25:53.794593 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e957ebc4-a5c0-4c29-9784-51d1988e0008-tigera-ca-bundle\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797534 kubelet[3349]: I0307 01:25:53.794630 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-var-lib-calico\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797534 kubelet[3349]: I0307 01:25:53.794688 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-lib-modules\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797534 kubelet[3349]: I0307 01:25:53.794708 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-flexvol-driver-host\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797534 kubelet[3349]: I0307 01:25:53.794731 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-sys-fs\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.797534 kubelet[3349]: I0307 01:25:53.794745 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e957ebc4-a5c0-4c29-9784-51d1988e0008-xtables-lock\") pod \"calico-node-fk6gd\" (UID: \"e957ebc4-a5c0-4c29-9784-51d1988e0008\") " pod="calico-system/calico-node-fk6gd" Mar 7 01:25:53.837063 kubelet[3349]: E0307 01:25:53.836943 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:25:53.896529 kubelet[3349]: I0307 01:25:53.895873 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d63cfcaa-807b-4baa-81f4-479a9dbfdd0f-kubelet-dir\") pod \"csi-node-driver-m29f2\" (UID: \"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f\") " pod="calico-system/csi-node-driver-m29f2" Mar 7 01:25:53.896732 kubelet[3349]: I0307 01:25:53.896714 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d63cfcaa-807b-4baa-81f4-479a9dbfdd0f-registration-dir\") pod \"csi-node-driver-m29f2\" (UID: \"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f\") " pod="calico-system/csi-node-driver-m29f2" Mar 7 01:25:53.897129 kubelet[3349]: I0307 01:25:53.897112 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d63cfcaa-807b-4baa-81f4-479a9dbfdd0f-varrun\") pod \"csi-node-driver-m29f2\" (UID: \"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f\") " pod="calico-system/csi-node-driver-m29f2" Mar 7 01:25:53.897290 kubelet[3349]: I0307 01:25:53.897203 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92tp7\" (UniqueName: \"kubernetes.io/projected/d63cfcaa-807b-4baa-81f4-479a9dbfdd0f-kube-api-access-92tp7\") pod \"csi-node-driver-m29f2\" (UID: \"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f\") " pod="calico-system/csi-node-driver-m29f2" Mar 7 01:25:53.898516 kubelet[3349]: I0307 01:25:53.898432 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d63cfcaa-807b-4baa-81f4-479a9dbfdd0f-socket-dir\") pod \"csi-node-driver-m29f2\" (UID: 
\"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f\") " pod="calico-system/csi-node-driver-m29f2" Mar 7 01:25:53.898874 kubelet[3349]: E0307 01:25:53.898765 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.898874 kubelet[3349]: W0307 01:25:53.898804 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.898874 kubelet[3349]: E0307 01:25:53.898829 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.899417 kubelet[3349]: E0307 01:25:53.899358 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.899417 kubelet[3349]: W0307 01:25:53.899372 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.899417 kubelet[3349]: E0307 01:25:53.899384 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.899855 kubelet[3349]: E0307 01:25:53.899781 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.899855 kubelet[3349]: W0307 01:25:53.899793 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.899855 kubelet[3349]: E0307 01:25:53.899804 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.900158 kubelet[3349]: E0307 01:25:53.900079 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.900158 kubelet[3349]: W0307 01:25:53.900089 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.900158 kubelet[3349]: E0307 01:25:53.900119 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.900588 kubelet[3349]: E0307 01:25:53.900483 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.900588 kubelet[3349]: W0307 01:25:53.900495 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.900588 kubelet[3349]: E0307 01:25:53.900505 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.900792 kubelet[3349]: E0307 01:25:53.900731 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.900792 kubelet[3349]: W0307 01:25:53.900741 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.900792 kubelet[3349]: E0307 01:25:53.900751 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.901069 kubelet[3349]: E0307 01:25:53.901007 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.901069 kubelet[3349]: W0307 01:25:53.901018 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.901069 kubelet[3349]: E0307 01:25:53.901028 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.901361 kubelet[3349]: E0307 01:25:53.901282 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.901361 kubelet[3349]: W0307 01:25:53.901308 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.901361 kubelet[3349]: E0307 01:25:53.901321 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.901697 kubelet[3349]: E0307 01:25:53.901589 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.901697 kubelet[3349]: W0307 01:25:53.901600 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.901697 kubelet[3349]: E0307 01:25:53.901610 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.901881 kubelet[3349]: E0307 01:25:53.901824 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.901881 kubelet[3349]: W0307 01:25:53.901833 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.901881 kubelet[3349]: E0307 01:25:53.901842 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.902153 kubelet[3349]: E0307 01:25:53.902090 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.902153 kubelet[3349]: W0307 01:25:53.902101 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.902153 kubelet[3349]: E0307 01:25:53.902110 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.902469 kubelet[3349]: E0307 01:25:53.902366 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.902469 kubelet[3349]: W0307 01:25:53.902377 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.902469 kubelet[3349]: E0307 01:25:53.902386 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.903234 kubelet[3349]: E0307 01:25:53.903166 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.903234 kubelet[3349]: W0307 01:25:53.903202 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.903529 kubelet[3349]: E0307 01:25:53.903216 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.903905 kubelet[3349]: E0307 01:25:53.903819 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.903905 kubelet[3349]: W0307 01:25:53.903833 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.903905 kubelet[3349]: E0307 01:25:53.903844 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.904173 kubelet[3349]: E0307 01:25:53.904162 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.904335 kubelet[3349]: W0307 01:25:53.904224 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.904335 kubelet[3349]: E0307 01:25:53.904237 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.905095 kubelet[3349]: E0307 01:25:53.904957 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.905095 kubelet[3349]: W0307 01:25:53.904974 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.905095 kubelet[3349]: E0307 01:25:53.904986 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.905483 kubelet[3349]: E0307 01:25:53.905460 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.905634 kubelet[3349]: W0307 01:25:53.905557 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.905634 kubelet[3349]: E0307 01:25:53.905573 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.905945 kubelet[3349]: E0307 01:25:53.905845 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.905945 kubelet[3349]: W0307 01:25:53.905868 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.905945 kubelet[3349]: E0307 01:25:53.905879 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.906221 kubelet[3349]: E0307 01:25:53.906158 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.906221 kubelet[3349]: W0307 01:25:53.906172 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.906221 kubelet[3349]: E0307 01:25:53.906182 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.906612 kubelet[3349]: E0307 01:25:53.906544 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.906612 kubelet[3349]: W0307 01:25:53.906555 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.906612 kubelet[3349]: E0307 01:25:53.906566 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.906982 kubelet[3349]: E0307 01:25:53.906970 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.907157 kubelet[3349]: W0307 01:25:53.907023 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.907157 kubelet[3349]: E0307 01:25:53.907038 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.907436 kubelet[3349]: E0307 01:25:53.907418 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.907588 kubelet[3349]: W0307 01:25:53.907527 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.907588 kubelet[3349]: E0307 01:25:53.907545 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.907893 kubelet[3349]: E0307 01:25:53.907828 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.907893 kubelet[3349]: W0307 01:25:53.907838 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.907893 kubelet[3349]: E0307 01:25:53.907848 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.908226 kubelet[3349]: E0307 01:25:53.908110 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.908226 kubelet[3349]: W0307 01:25:53.908121 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.908226 kubelet[3349]: E0307 01:25:53.908131 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.908515 kubelet[3349]: E0307 01:25:53.908415 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.908515 kubelet[3349]: W0307 01:25:53.908427 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.908515 kubelet[3349]: E0307 01:25:53.908469 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.908823 kubelet[3349]: E0307 01:25:53.908743 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.908823 kubelet[3349]: W0307 01:25:53.908755 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.908823 kubelet[3349]: E0307 01:25:53.908765 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.909143 kubelet[3349]: E0307 01:25:53.909056 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.909143 kubelet[3349]: W0307 01:25:53.909067 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.909143 kubelet[3349]: E0307 01:25:53.909078 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.910961 kubelet[3349]: E0307 01:25:53.910357 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.910961 kubelet[3349]: W0307 01:25:53.910373 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.910961 kubelet[3349]: E0307 01:25:53.910387 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.910961 kubelet[3349]: E0307 01:25:53.910590 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.910961 kubelet[3349]: W0307 01:25:53.910598 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.910961 kubelet[3349]: E0307 01:25:53.910607 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.910961 kubelet[3349]: E0307 01:25:53.910752 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.910961 kubelet[3349]: W0307 01:25:53.910762 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.910961 kubelet[3349]: E0307 01:25:53.910771 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.914898 kubelet[3349]: E0307 01:25:53.912726 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.914898 kubelet[3349]: W0307 01:25:53.912746 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.914898 kubelet[3349]: E0307 01:25:53.912759 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.916210 kubelet[3349]: E0307 01:25:53.916189 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.916210 kubelet[3349]: W0307 01:25:53.916205 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.916404 kubelet[3349]: E0307 01:25:53.916218 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.917196 kubelet[3349]: E0307 01:25:53.917174 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.917196 kubelet[3349]: W0307 01:25:53.917195 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.917289 kubelet[3349]: E0307 01:25:53.917211 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.920472 kubelet[3349]: E0307 01:25:53.918599 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.920472 kubelet[3349]: W0307 01:25:53.918620 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.920472 kubelet[3349]: E0307 01:25:53.918635 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.921482 kubelet[3349]: E0307 01:25:53.921466 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.922337 kubelet[3349]: W0307 01:25:53.922318 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.922423 kubelet[3349]: E0307 01:25:53.922413 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.923920 kubelet[3349]: E0307 01:25:53.923905 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.924015 kubelet[3349]: W0307 01:25:53.924003 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.924073 kubelet[3349]: E0307 01:25:53.924063 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:53.924293 kubelet[3349]: E0307 01:25:53.924282 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.924390 kubelet[3349]: W0307 01:25:53.924378 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.924442 kubelet[3349]: E0307 01:25:53.924431 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.930055 kubelet[3349]: E0307 01:25:53.930036 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.933542 kubelet[3349]: W0307 01:25:53.932322 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.933542 kubelet[3349]: E0307 01:25:53.932349 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:53.934566 containerd[1792]: time="2026-03-07T01:25:53.934197150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cb9c6786-zwlfk,Uid:202e52c6-f939-4b6f-8c81-3ce5c3fae76a,Namespace:calico-system,Attempt:0,}" Mar 7 01:25:53.974208 containerd[1792]: time="2026-03-07T01:25:53.974102494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:53.974208 containerd[1792]: time="2026-03-07T01:25:53.974152014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:53.974779 containerd[1792]: time="2026-03-07T01:25:53.974743773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:53.974991 containerd[1792]: time="2026-03-07T01:25:53.974944933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:53.999712 kubelet[3349]: E0307 01:25:53.999593 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:53.999990 kubelet[3349]: W0307 01:25:53.999613 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:53.999990 kubelet[3349]: E0307 01:25:53.999800 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.000542 kubelet[3349]: E0307 01:25:54.000400 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.000542 kubelet[3349]: W0307 01:25:54.000413 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.000542 kubelet[3349]: E0307 01:25:54.000424 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.000759 kubelet[3349]: E0307 01:25:54.000677 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.000759 kubelet[3349]: W0307 01:25:54.000688 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.000759 kubelet[3349]: E0307 01:25:54.000700 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.001019 kubelet[3349]: E0307 01:25:54.000833 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.001019 kubelet[3349]: W0307 01:25:54.000844 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.001019 kubelet[3349]: E0307 01:25:54.000853 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.001019 kubelet[3349]: E0307 01:25:54.000965 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.001019 kubelet[3349]: W0307 01:25:54.000972 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.001019 kubelet[3349]: E0307 01:25:54.000979 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.001576 kubelet[3349]: E0307 01:25:54.001137 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.001576 kubelet[3349]: W0307 01:25:54.001144 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.001576 kubelet[3349]: E0307 01:25:54.001189 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.001916 kubelet[3349]: E0307 01:25:54.001901 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.001916 kubelet[3349]: W0307 01:25:54.001914 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.002213 kubelet[3349]: E0307 01:25:54.001926 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.002213 kubelet[3349]: E0307 01:25:54.002128 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.002213 kubelet[3349]: W0307 01:25:54.002139 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.002213 kubelet[3349]: E0307 01:25:54.002149 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.004438 kubelet[3349]: E0307 01:25:54.002321 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.004438 kubelet[3349]: W0307 01:25:54.002330 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.004438 kubelet[3349]: E0307 01:25:54.002340 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.004766 kubelet[3349]: E0307 01:25:54.004536 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.004766 kubelet[3349]: W0307 01:25:54.004550 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.004766 kubelet[3349]: E0307 01:25:54.004563 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.004894 kubelet[3349]: E0307 01:25:54.004882 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.004943 kubelet[3349]: W0307 01:25:54.004933 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.005000 kubelet[3349]: E0307 01:25:54.004989 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.005203 kubelet[3349]: E0307 01:25:54.005192 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.005388 kubelet[3349]: W0307 01:25:54.005266 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.005388 kubelet[3349]: E0307 01:25:54.005281 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.005523 kubelet[3349]: E0307 01:25:54.005512 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.005572 kubelet[3349]: W0307 01:25:54.005562 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.005620 kubelet[3349]: E0307 01:25:54.005611 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.005829 kubelet[3349]: E0307 01:25:54.005818 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.006001 kubelet[3349]: W0307 01:25:54.005882 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.006001 kubelet[3349]: E0307 01:25:54.005896 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.006117 kubelet[3349]: E0307 01:25:54.006106 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.006165 kubelet[3349]: W0307 01:25:54.006155 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.006214 kubelet[3349]: E0307 01:25:54.006204 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.006606 kubelet[3349]: E0307 01:25:54.006587 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.006606 kubelet[3349]: W0307 01:25:54.006604 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.006704 kubelet[3349]: E0307 01:25:54.006619 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.006856 kubelet[3349]: E0307 01:25:54.006838 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.006856 kubelet[3349]: W0307 01:25:54.006852 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.006975 kubelet[3349]: E0307 01:25:54.006863 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.007230 kubelet[3349]: E0307 01:25:54.007213 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.007230 kubelet[3349]: W0307 01:25:54.007229 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.007317 kubelet[3349]: E0307 01:25:54.007239 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.007619 kubelet[3349]: E0307 01:25:54.007601 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.007619 kubelet[3349]: W0307 01:25:54.007618 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.007683 kubelet[3349]: E0307 01:25:54.007629 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.008601 kubelet[3349]: E0307 01:25:54.008500 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.008601 kubelet[3349]: W0307 01:25:54.008513 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.008601 kubelet[3349]: E0307 01:25:54.008524 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.009228 kubelet[3349]: E0307 01:25:54.008843 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.009425 kubelet[3349]: W0307 01:25:54.009332 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.009425 kubelet[3349]: E0307 01:25:54.009353 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.009770 kubelet[3349]: E0307 01:25:54.009651 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.009770 kubelet[3349]: W0307 01:25:54.009665 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.009770 kubelet[3349]: E0307 01:25:54.009675 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.011433 kubelet[3349]: E0307 01:25:54.011275 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.011433 kubelet[3349]: W0307 01:25:54.011289 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.011433 kubelet[3349]: E0307 01:25:54.011329 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.012765 kubelet[3349]: E0307 01:25:54.012747 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.012932 kubelet[3349]: W0307 01:25:54.012865 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.012932 kubelet[3349]: E0307 01:25:54.012889 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.013490 kubelet[3349]: E0307 01:25:54.013441 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.013490 kubelet[3349]: W0307 01:25:54.013454 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.013490 kubelet[3349]: E0307 01:25:54.013467 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:54.023054 kubelet[3349]: E0307 01:25:54.023031 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:54.023054 kubelet[3349]: W0307 01:25:54.023049 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:54.023187 kubelet[3349]: E0307 01:25:54.023169 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:54.023593 containerd[1792]: time="2026-03-07T01:25:54.023563305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fk6gd,Uid:e957ebc4-a5c0-4c29-9784-51d1988e0008,Namespace:calico-system,Attempt:0,}" Mar 7 01:25:54.028976 containerd[1792]: time="2026-03-07T01:25:54.028950658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cb9c6786-zwlfk,Uid:202e52c6-f939-4b6f-8c81-3ce5c3fae76a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d26ba51d0ee0fe77123e3f57838b22baba44774f73a5e0e422017fe53c3993b\"" Mar 7 01:25:54.030971 containerd[1792]: time="2026-03-07T01:25:54.030819215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:25:54.058396 containerd[1792]: time="2026-03-07T01:25:54.058329577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:25:54.058519 containerd[1792]: time="2026-03-07T01:25:54.058380897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:25:54.058585 containerd[1792]: time="2026-03-07T01:25:54.058558576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:54.059099 containerd[1792]: time="2026-03-07T01:25:54.058989416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:25:54.084999 containerd[1792]: time="2026-03-07T01:25:54.084960780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fk6gd,Uid:e957ebc4-a5c0-4c29-9784-51d1988e0008,Namespace:calico-system,Attempt:0,} returns sandbox id \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\"" Mar 7 01:25:55.251668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888557534.mount: Deactivated successfully. Mar 7 01:25:56.102745 kubelet[3349]: E0307 01:25:56.102141 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:25:56.470181 containerd[1792]: time="2026-03-07T01:25:56.469389206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:56.471883 containerd[1792]: time="2026-03-07T01:25:56.471855843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 7 01:25:56.474178 containerd[1792]: time="2026-03-07T01:25:56.474157280Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:56.478567 containerd[1792]: time="2026-03-07T01:25:56.478537315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:56.480698 containerd[1792]: time="2026-03-07T01:25:56.480653072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.449802177s" Mar 7 01:25:56.480698 containerd[1792]: time="2026-03-07T01:25:56.480698312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 01:25:56.483516 containerd[1792]: time="2026-03-07T01:25:56.483493068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:25:56.502462 containerd[1792]: time="2026-03-07T01:25:56.502427845Z" level=info msg="CreateContainer within sandbox \"6d26ba51d0ee0fe77123e3f57838b22baba44774f73a5e0e422017fe53c3993b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:25:56.537780 containerd[1792]: time="2026-03-07T01:25:56.537647641Z" level=info msg="CreateContainer within sandbox \"6d26ba51d0ee0fe77123e3f57838b22baba44774f73a5e0e422017fe53c3993b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f1cdda936710cd516ea60d9fca0b3159334e51b8d88f8c15309405dd82b0f7c9\"" Mar 7 01:25:56.539201 containerd[1792]: time="2026-03-07T01:25:56.538398120Z" level=info msg="StartContainer for \"f1cdda936710cd516ea60d9fca0b3159334e51b8d88f8c15309405dd82b0f7c9\"" Mar 7 01:25:56.592973 containerd[1792]: time="2026-03-07T01:25:56.592924413Z" level=info msg="StartContainer for \"f1cdda936710cd516ea60d9fca0b3159334e51b8d88f8c15309405dd82b0f7c9\" returns successfully" Mar 7 01:25:57.168401 kubelet[3349]: I0307 01:25:57.167867 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75cb9c6786-zwlfk" podStartSLOduration=1.7148977269999999 podStartE2EDuration="4.16785254s" podCreationTimestamp="2026-03-07 01:25:53 +0000 UTC" 
firstStartedPulling="2026-03-07 01:25:54.030011536 +0000 UTC m=+22.084767866" lastFinishedPulling="2026-03-07 01:25:56.482966309 +0000 UTC m=+24.537722679" observedRunningTime="2026-03-07 01:25:57.16757414 +0000 UTC m=+25.222330470" watchObservedRunningTime="2026-03-07 01:25:57.16785254 +0000 UTC m=+25.222608910" Mar 7 01:25:57.209712 kubelet[3349]: E0307 01:25:57.209585 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.209712 kubelet[3349]: W0307 01:25:57.209613 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.209712 kubelet[3349]: E0307 01:25:57.209633 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.210040 kubelet[3349]: E0307 01:25:57.209912 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.210040 kubelet[3349]: W0307 01:25:57.209924 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.210040 kubelet[3349]: E0307 01:25:57.209961 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.210190 kubelet[3349]: E0307 01:25:57.210180 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.210240 kubelet[3349]: W0307 01:25:57.210230 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.210305 kubelet[3349]: E0307 01:25:57.210286 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.210501 kubelet[3349]: E0307 01:25:57.210489 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.210673 kubelet[3349]: W0307 01:25:57.210569 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.210673 kubelet[3349]: E0307 01:25:57.210586 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.210790 kubelet[3349]: E0307 01:25:57.210780 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.210843 kubelet[3349]: W0307 01:25:57.210833 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.210896 kubelet[3349]: E0307 01:25:57.210887 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.211160 kubelet[3349]: E0307 01:25:57.211091 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.211160 kubelet[3349]: W0307 01:25:57.211102 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.211160 kubelet[3349]: E0307 01:25:57.211112 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.211507 kubelet[3349]: E0307 01:25:57.211418 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.211507 kubelet[3349]: W0307 01:25:57.211430 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.211507 kubelet[3349]: E0307 01:25:57.211440 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.211651 kubelet[3349]: E0307 01:25:57.211641 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.211710 kubelet[3349]: W0307 01:25:57.211700 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.211839 kubelet[3349]: E0307 01:25:57.211756 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.211934 kubelet[3349]: E0307 01:25:57.211923 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.211982 kubelet[3349]: W0307 01:25:57.211973 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.212028 kubelet[3349]: E0307 01:25:57.212020 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.212200 kubelet[3349]: E0307 01:25:57.212189 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.212360 kubelet[3349]: W0307 01:25:57.212261 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.212360 kubelet[3349]: E0307 01:25:57.212277 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.212483 kubelet[3349]: E0307 01:25:57.212472 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.212537 kubelet[3349]: W0307 01:25:57.212527 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.212587 kubelet[3349]: E0307 01:25:57.212578 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.212951 kubelet[3349]: E0307 01:25:57.212848 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.212951 kubelet[3349]: W0307 01:25:57.212860 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.212951 kubelet[3349]: E0307 01:25:57.212872 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.213105 kubelet[3349]: E0307 01:25:57.213094 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.213231 kubelet[3349]: W0307 01:25:57.213145 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.213231 kubelet[3349]: E0307 01:25:57.213159 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.213384 kubelet[3349]: E0307 01:25:57.213373 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.213439 kubelet[3349]: W0307 01:25:57.213430 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.213493 kubelet[3349]: E0307 01:25:57.213482 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.213741 kubelet[3349]: E0307 01:25:57.213664 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.213741 kubelet[3349]: W0307 01:25:57.213675 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.213741 kubelet[3349]: E0307 01:25:57.213684 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.228113 kubelet[3349]: E0307 01:25:57.228092 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.228113 kubelet[3349]: W0307 01:25:57.228109 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.228237 kubelet[3349]: E0307 01:25:57.228124 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.228339 kubelet[3349]: E0307 01:25:57.228326 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.228339 kubelet[3349]: W0307 01:25:57.228338 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.228400 kubelet[3349]: E0307 01:25:57.228348 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.228531 kubelet[3349]: E0307 01:25:57.228519 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.228531 kubelet[3349]: W0307 01:25:57.228530 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.228587 kubelet[3349]: E0307 01:25:57.228538 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.228758 kubelet[3349]: E0307 01:25:57.228747 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.228758 kubelet[3349]: W0307 01:25:57.228757 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.228824 kubelet[3349]: E0307 01:25:57.228766 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.228947 kubelet[3349]: E0307 01:25:57.228936 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.228947 kubelet[3349]: W0307 01:25:57.228946 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.229005 kubelet[3349]: E0307 01:25:57.228955 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.229111 kubelet[3349]: E0307 01:25:57.229098 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.229111 kubelet[3349]: W0307 01:25:57.229108 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.229165 kubelet[3349]: E0307 01:25:57.229119 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.229332 kubelet[3349]: E0307 01:25:57.229320 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.229332 kubelet[3349]: W0307 01:25:57.229331 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.229412 kubelet[3349]: E0307 01:25:57.229339 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.229673 kubelet[3349]: E0307 01:25:57.229660 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.229673 kubelet[3349]: W0307 01:25:57.229672 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.229727 kubelet[3349]: E0307 01:25:57.229682 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.229851 kubelet[3349]: E0307 01:25:57.229836 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.229851 kubelet[3349]: W0307 01:25:57.229849 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.229913 kubelet[3349]: E0307 01:25:57.229858 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.230017 kubelet[3349]: E0307 01:25:57.230007 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.230017 kubelet[3349]: W0307 01:25:57.230016 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.230070 kubelet[3349]: E0307 01:25:57.230025 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.230194 kubelet[3349]: E0307 01:25:57.230183 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.230236 kubelet[3349]: W0307 01:25:57.230194 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.230262 kubelet[3349]: E0307 01:25:57.230239 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.230437 kubelet[3349]: E0307 01:25:57.230425 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.230437 kubelet[3349]: W0307 01:25:57.230436 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.230505 kubelet[3349]: E0307 01:25:57.230445 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.230623 kubelet[3349]: E0307 01:25:57.230609 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.230623 kubelet[3349]: W0307 01:25:57.230622 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.230676 kubelet[3349]: E0307 01:25:57.230631 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.230922 kubelet[3349]: E0307 01:25:57.230906 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.230922 kubelet[3349]: W0307 01:25:57.230921 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.230982 kubelet[3349]: E0307 01:25:57.230930 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.231143 kubelet[3349]: E0307 01:25:57.231069 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.231143 kubelet[3349]: W0307 01:25:57.231078 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.231143 kubelet[3349]: E0307 01:25:57.231087 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.231247 kubelet[3349]: E0307 01:25:57.231234 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.231247 kubelet[3349]: W0307 01:25:57.231244 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.231309 kubelet[3349]: E0307 01:25:57.231252 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.231702 kubelet[3349]: E0307 01:25:57.231685 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.231702 kubelet[3349]: W0307 01:25:57.231700 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.231769 kubelet[3349]: E0307 01:25:57.231710 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:25:57.231884 kubelet[3349]: E0307 01:25:57.231869 3349 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:25:57.231884 kubelet[3349]: W0307 01:25:57.231881 3349 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:25:57.231930 kubelet[3349]: E0307 01:25:57.231890 3349 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:25:57.591505 containerd[1792]: time="2026-03-07T01:25:57.590893255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:57.593820 containerd[1792]: time="2026-03-07T01:25:57.593788611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 7 01:25:57.599075 containerd[1792]: time="2026-03-07T01:25:57.598758405Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:57.603020 containerd[1792]: time="2026-03-07T01:25:57.602993160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:25:57.603616 containerd[1792]: time="2026-03-07T01:25:57.603585679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.118779052s" Mar 7 01:25:57.603675 containerd[1792]: time="2026-03-07T01:25:57.603617519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 7 01:25:57.610373 containerd[1792]: time="2026-03-07T01:25:57.610190671Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:25:57.640368 containerd[1792]: time="2026-03-07T01:25:57.640332394Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b7af3d14d896fa43192e16cc31945718163789d16599a748912ded9cc05ced40\"" Mar 7 01:25:57.642536 containerd[1792]: time="2026-03-07T01:25:57.641107113Z" level=info msg="StartContainer for \"b7af3d14d896fa43192e16cc31945718163789d16599a748912ded9cc05ced40\"" Mar 7 01:25:57.696126 containerd[1792]: time="2026-03-07T01:25:57.696066685Z" level=info msg="StartContainer for \"b7af3d14d896fa43192e16cc31945718163789d16599a748912ded9cc05ced40\" returns successfully" Mar 7 01:25:58.091333 kubelet[3349]: E0307 01:25:58.090511 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:25:58.158157 kubelet[3349]: I0307 01:25:58.157563 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 
01:25:58.489223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7af3d14d896fa43192e16cc31945718163789d16599a748912ded9cc05ced40-rootfs.mount: Deactivated successfully. Mar 7 01:25:58.855129 containerd[1792]: time="2026-03-07T01:25:58.854781447Z" level=info msg="shim disconnected" id=b7af3d14d896fa43192e16cc31945718163789d16599a748912ded9cc05ced40 namespace=k8s.io Mar 7 01:25:58.855129 containerd[1792]: time="2026-03-07T01:25:58.855125607Z" level=warning msg="cleaning up after shim disconnected" id=b7af3d14d896fa43192e16cc31945718163789d16599a748912ded9cc05ced40 namespace=k8s.io Mar 7 01:25:58.855129 containerd[1792]: time="2026-03-07T01:25:58.855138927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:25:59.161554 containerd[1792]: time="2026-03-07T01:25:59.161485507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:26:00.090968 kubelet[3349]: E0307 01:26:00.090587 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:26:02.090986 kubelet[3349]: E0307 01:26:02.090916 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:26:03.219803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167552956.mount: Deactivated successfully. 
Mar 7 01:26:03.259230 containerd[1792]: time="2026-03-07T01:26:03.259184745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:03.261827 containerd[1792]: time="2026-03-07T01:26:03.261798102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 7 01:26:03.264895 containerd[1792]: time="2026-03-07T01:26:03.264869778Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:03.268713 containerd[1792]: time="2026-03-07T01:26:03.268664573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:03.269893 containerd[1792]: time="2026-03-07T01:26:03.269221293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.107691586s" Mar 7 01:26:03.269893 containerd[1792]: time="2026-03-07T01:26:03.269251253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 7 01:26:03.277826 containerd[1792]: time="2026-03-07T01:26:03.277791082Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:26:03.458196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602936764.mount: Deactivated 
successfully. Mar 7 01:26:03.466634 containerd[1792]: time="2026-03-07T01:26:03.466600208Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2220fad7eb067c0258547c7aa120b51239b859cd09d81623e8587e004a6fa1e1\"" Mar 7 01:26:03.467418 containerd[1792]: time="2026-03-07T01:26:03.467391807Z" level=info msg="StartContainer for \"2220fad7eb067c0258547c7aa120b51239b859cd09d81623e8587e004a6fa1e1\"" Mar 7 01:26:03.522964 containerd[1792]: time="2026-03-07T01:26:03.522494853Z" level=info msg="StartContainer for \"2220fad7eb067c0258547c7aa120b51239b859cd09d81623e8587e004a6fa1e1\" returns successfully" Mar 7 01:26:04.091316 kubelet[3349]: E0307 01:26:04.090591 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:26:04.218758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2220fad7eb067c0258547c7aa120b51239b859cd09d81623e8587e004a6fa1e1-rootfs.mount: Deactivated successfully. 
Mar 7 01:26:04.950964 containerd[1792]: time="2026-03-07T01:26:04.950909586Z" level=info msg="shim disconnected" id=2220fad7eb067c0258547c7aa120b51239b859cd09d81623e8587e004a6fa1e1 namespace=k8s.io Mar 7 01:26:04.951521 containerd[1792]: time="2026-03-07T01:26:04.951374545Z" level=warning msg="cleaning up after shim disconnected" id=2220fad7eb067c0258547c7aa120b51239b859cd09d81623e8587e004a6fa1e1 namespace=k8s.io Mar 7 01:26:04.951521 containerd[1792]: time="2026-03-07T01:26:04.951395785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:26:05.177507 containerd[1792]: time="2026-03-07T01:26:05.177471265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 01:26:06.091394 kubelet[3349]: E0307 01:26:06.091022 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:26:07.362500 containerd[1792]: time="2026-03-07T01:26:07.362455724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:07.366725 containerd[1792]: time="2026-03-07T01:26:07.366587358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 7 01:26:07.371183 containerd[1792]: time="2026-03-07T01:26:07.370223033Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:07.374808 containerd[1792]: time="2026-03-07T01:26:07.374764027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 7 01:26:07.375615 containerd[1792]: time="2026-03-07T01:26:07.375525666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.198011642s" Mar 7 01:26:07.375615 containerd[1792]: time="2026-03-07T01:26:07.375553666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 7 01:26:07.384650 containerd[1792]: time="2026-03-07T01:26:07.384482893Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:26:07.416520 containerd[1792]: time="2026-03-07T01:26:07.416415208Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"faeba6f139aaa05a709ce64871852c35bff54fde0932cde80e431b20ea630f99\"" Mar 7 01:26:07.418237 containerd[1792]: time="2026-03-07T01:26:07.418138445Z" level=info msg="StartContainer for \"faeba6f139aaa05a709ce64871852c35bff54fde0932cde80e431b20ea630f99\"" Mar 7 01:26:07.467530 containerd[1792]: time="2026-03-07T01:26:07.467487415Z" level=info msg="StartContainer for \"faeba6f139aaa05a709ce64871852c35bff54fde0932cde80e431b20ea630f99\" returns successfully" Mar 7 01:26:08.092498 kubelet[3349]: E0307 01:26:08.091570 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:26:09.551112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faeba6f139aaa05a709ce64871852c35bff54fde0932cde80e431b20ea630f99-rootfs.mount: Deactivated successfully. Mar 7 01:26:09.560190 kubelet[3349]: I0307 01:26:09.559378 3349 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 01:26:09.562847 containerd[1792]: time="2026-03-07T01:26:09.562683402Z" level=info msg="shim disconnected" id=faeba6f139aaa05a709ce64871852c35bff54fde0932cde80e431b20ea630f99 namespace=k8s.io Mar 7 01:26:09.562847 containerd[1792]: time="2026-03-07T01:26:09.562732562Z" level=warning msg="cleaning up after shim disconnected" id=faeba6f139aaa05a709ce64871852c35bff54fde0932cde80e431b20ea630f99 namespace=k8s.io Mar 7 01:26:09.562847 containerd[1792]: time="2026-03-07T01:26:09.562740162Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:26:09.668871 kubelet[3349]: I0307 01:26:09.668216 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:26:09.701942 kubelet[3349]: I0307 01:26:09.701826 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cc35d0d5-b5ae-4bde-a7b6-5af37327d945-goldmane-key-pair\") pod \"goldmane-5b85766d88-nsg2r\" (UID: \"cc35d0d5-b5ae-4bde-a7b6-5af37327d945\") " pod="calico-system/goldmane-5b85766d88-nsg2r" Mar 7 01:26:09.701942 kubelet[3349]: I0307 01:26:09.701861 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/581cbdc8-dccd-4487-b131-103b5bf24401-calico-apiserver-certs\") pod \"calico-apiserver-6f47778fd4-2j8nd\" (UID: \"581cbdc8-dccd-4487-b131-103b5bf24401\") " pod="calico-system/calico-apiserver-6f47778fd4-2j8nd" Mar 7 01:26:09.701942 kubelet[3349]: I0307 
01:26:09.701891 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e548dd02-7656-4281-b324-543590d586b1-tigera-ca-bundle\") pod \"calico-kube-controllers-55c8d85db8-ctmcz\" (UID: \"e548dd02-7656-4281-b324-543590d586b1\") " pod="calico-system/calico-kube-controllers-55c8d85db8-ctmcz" Mar 7 01:26:09.701942 kubelet[3349]: I0307 01:26:09.701907 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkpx8\" (UniqueName: \"kubernetes.io/projected/e51adf89-b01f-431e-8e84-4c2814fa6d45-kube-api-access-hkpx8\") pod \"calico-apiserver-6f47778fd4-w44bp\" (UID: \"e51adf89-b01f-431e-8e84-4c2814fa6d45\") " pod="calico-system/calico-apiserver-6f47778fd4-w44bp" Mar 7 01:26:09.701942 kubelet[3349]: I0307 01:26:09.701922 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc35d0d5-b5ae-4bde-a7b6-5af37327d945-config\") pod \"goldmane-5b85766d88-nsg2r\" (UID: \"cc35d0d5-b5ae-4bde-a7b6-5af37327d945\") " pod="calico-system/goldmane-5b85766d88-nsg2r" Mar 7 01:26:09.702731 kubelet[3349]: I0307 01:26:09.702199 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a16e5b6-f499-4738-9cd6-aee9c02063e5-config-volume\") pod \"coredns-674b8bbfcf-l4xdw\" (UID: \"9a16e5b6-f499-4738-9cd6-aee9c02063e5\") " pod="kube-system/coredns-674b8bbfcf-l4xdw" Mar 7 01:26:09.702731 kubelet[3349]: I0307 01:26:09.702238 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-splnf\" (UniqueName: \"kubernetes.io/projected/e548dd02-7656-4281-b324-543590d586b1-kube-api-access-splnf\") pod \"calico-kube-controllers-55c8d85db8-ctmcz\" (UID: \"e548dd02-7656-4281-b324-543590d586b1\") " 
pod="calico-system/calico-kube-controllers-55c8d85db8-ctmcz" Mar 7 01:26:09.702731 kubelet[3349]: I0307 01:26:09.702256 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gql7p\" (UniqueName: \"kubernetes.io/projected/198e2d2d-ab2f-4ba4-877d-5a441bf77609-kube-api-access-gql7p\") pod \"coredns-674b8bbfcf-458z5\" (UID: \"198e2d2d-ab2f-4ba4-877d-5a441bf77609\") " pod="kube-system/coredns-674b8bbfcf-458z5" Mar 7 01:26:09.702731 kubelet[3349]: I0307 01:26:09.702274 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvk4m\" (UniqueName: \"kubernetes.io/projected/cc35d0d5-b5ae-4bde-a7b6-5af37327d945-kube-api-access-qvk4m\") pod \"goldmane-5b85766d88-nsg2r\" (UID: \"cc35d0d5-b5ae-4bde-a7b6-5af37327d945\") " pod="calico-system/goldmane-5b85766d88-nsg2r" Mar 7 01:26:09.702731 kubelet[3349]: I0307 01:26:09.702314 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvns4\" (UniqueName: \"kubernetes.io/projected/9a16e5b6-f499-4738-9cd6-aee9c02063e5-kube-api-access-zvns4\") pod \"coredns-674b8bbfcf-l4xdw\" (UID: \"9a16e5b6-f499-4738-9cd6-aee9c02063e5\") " pod="kube-system/coredns-674b8bbfcf-l4xdw" Mar 7 01:26:09.702904 kubelet[3349]: I0307 01:26:09.702332 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-nginx-config\") pod \"whisker-57d95b7b87-cv65p\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " pod="calico-system/whisker-57d95b7b87-cv65p" Mar 7 01:26:09.702904 kubelet[3349]: I0307 01:26:09.702347 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc35d0d5-b5ae-4bde-a7b6-5af37327d945-goldmane-ca-bundle\") pod 
\"goldmane-5b85766d88-nsg2r\" (UID: \"cc35d0d5-b5ae-4bde-a7b6-5af37327d945\") " pod="calico-system/goldmane-5b85766d88-nsg2r" Mar 7 01:26:09.702904 kubelet[3349]: I0307 01:26:09.702364 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/198e2d2d-ab2f-4ba4-877d-5a441bf77609-config-volume\") pod \"coredns-674b8bbfcf-458z5\" (UID: \"198e2d2d-ab2f-4ba4-877d-5a441bf77609\") " pod="kube-system/coredns-674b8bbfcf-458z5" Mar 7 01:26:09.702904 kubelet[3349]: I0307 01:26:09.702379 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e51adf89-b01f-431e-8e84-4c2814fa6d45-calico-apiserver-certs\") pod \"calico-apiserver-6f47778fd4-w44bp\" (UID: \"e51adf89-b01f-431e-8e84-4c2814fa6d45\") " pod="calico-system/calico-apiserver-6f47778fd4-w44bp" Mar 7 01:26:09.702904 kubelet[3349]: I0307 01:26:09.702393 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngcr\" (UniqueName: \"kubernetes.io/projected/581cbdc8-dccd-4487-b131-103b5bf24401-kube-api-access-pngcr\") pod \"calico-apiserver-6f47778fd4-2j8nd\" (UID: \"581cbdc8-dccd-4487-b131-103b5bf24401\") " pod="calico-system/calico-apiserver-6f47778fd4-2j8nd" Mar 7 01:26:09.703019 kubelet[3349]: I0307 01:26:09.702410 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-backend-key-pair\") pod \"whisker-57d95b7b87-cv65p\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " pod="calico-system/whisker-57d95b7b87-cv65p" Mar 7 01:26:09.703019 kubelet[3349]: I0307 01:26:09.702427 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-ca-bundle\") pod \"whisker-57d95b7b87-cv65p\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " pod="calico-system/whisker-57d95b7b87-cv65p" Mar 7 01:26:09.703019 kubelet[3349]: I0307 01:26:09.702441 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82shw\" (UniqueName: \"kubernetes.io/projected/90d18621-91ec-4cdc-aef0-b9e05c8877a0-kube-api-access-82shw\") pod \"whisker-57d95b7b87-cv65p\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " pod="calico-system/whisker-57d95b7b87-cv65p" Mar 7 01:26:09.917966 containerd[1792]: time="2026-03-07T01:26:09.917929378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4xdw,Uid:9a16e5b6-f499-4738-9cd6-aee9c02063e5,Namespace:kube-system,Attempt:0,}" Mar 7 01:26:09.924374 containerd[1792]: time="2026-03-07T01:26:09.923496530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d95b7b87-cv65p,Uid:90d18621-91ec-4cdc-aef0-b9e05c8877a0,Namespace:calico-system,Attempt:0,}" Mar 7 01:26:09.931142 containerd[1792]: time="2026-03-07T01:26:09.931117279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nsg2r,Uid:cc35d0d5-b5ae-4bde-a7b6-5af37327d945,Namespace:calico-system,Attempt:0,}" Mar 7 01:26:09.942351 containerd[1792]: time="2026-03-07T01:26:09.942320704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-458z5,Uid:198e2d2d-ab2f-4ba4-877d-5a441bf77609,Namespace:kube-system,Attempt:0,}" Mar 7 01:26:09.942540 containerd[1792]: time="2026-03-07T01:26:09.942517623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8d85db8-ctmcz,Uid:e548dd02-7656-4281-b324-543590d586b1,Namespace:calico-system,Attempt:0,}" Mar 7 01:26:09.952517 containerd[1792]: time="2026-03-07T01:26:09.952480649Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-2j8nd,Uid:581cbdc8-dccd-4487-b131-103b5bf24401,Namespace:calico-system,Attempt:0,}" Mar 7 01:26:09.952697 containerd[1792]: time="2026-03-07T01:26:09.952676409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-w44bp,Uid:e51adf89-b01f-431e-8e84-4c2814fa6d45,Namespace:calico-system,Attempt:0,}" Mar 7 01:26:10.096423 containerd[1792]: time="2026-03-07T01:26:10.096179485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m29f2,Uid:d63cfcaa-807b-4baa-81f4-479a9dbfdd0f,Namespace:calico-system,Attempt:0,}" Mar 7 01:26:10.097135 containerd[1792]: time="2026-03-07T01:26:10.097069164Z" level=error msg="Failed to destroy network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.097581 containerd[1792]: time="2026-03-07T01:26:10.097477803Z" level=error msg="encountered an error cleaning up failed sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.097581 containerd[1792]: time="2026-03-07T01:26:10.097537883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4xdw,Uid:9a16e5b6-f499-4738-9cd6-aee9c02063e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Mar 7 01:26:10.097870 kubelet[3349]: E0307 01:26:10.097827 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.098319 kubelet[3349]: E0307 01:26:10.098121 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l4xdw" Mar 7 01:26:10.098319 kubelet[3349]: E0307 01:26:10.098158 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-l4xdw" Mar 7 01:26:10.098319 kubelet[3349]: E0307 01:26:10.098206 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-l4xdw_kube-system(9a16e5b6-f499-4738-9cd6-aee9c02063e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-l4xdw_kube-system(9a16e5b6-f499-4738-9cd6-aee9c02063e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l4xdw" podUID="9a16e5b6-f499-4738-9cd6-aee9c02063e5" Mar 7 01:26:10.107644 containerd[1792]: time="2026-03-07T01:26:10.107599509Z" level=error msg="Failed to destroy network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.107890 containerd[1792]: time="2026-03-07T01:26:10.107865749Z" level=error msg="encountered an error cleaning up failed sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.107929 containerd[1792]: time="2026-03-07T01:26:10.107911869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d95b7b87-cv65p,Uid:90d18621-91ec-4cdc-aef0-b9e05c8877a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.108114 kubelet[3349]: E0307 01:26:10.108081 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 7 01:26:10.108170 kubelet[3349]: E0307 01:26:10.108129 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d95b7b87-cv65p" Mar 7 01:26:10.108170 kubelet[3349]: E0307 01:26:10.108152 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d95b7b87-cv65p" Mar 7 01:26:10.108221 kubelet[3349]: E0307 01:26:10.108195 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d95b7b87-cv65p_calico-system(90d18621-91ec-4cdc-aef0-b9e05c8877a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d95b7b87-cv65p_calico-system(90d18621-91ec-4cdc-aef0-b9e05c8877a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d95b7b87-cv65p" podUID="90d18621-91ec-4cdc-aef0-b9e05c8877a0" Mar 7 01:26:10.188165 kubelet[3349]: I0307 01:26:10.188067 3349 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:10.191657 containerd[1792]: time="2026-03-07T01:26:10.191157910Z" level=info msg="StopPodSandbox for \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\"" Mar 7 01:26:10.191657 containerd[1792]: time="2026-03-07T01:26:10.191455390Z" level=info msg="Ensure that sandbox 8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a in task-service has been cleanup successfully" Mar 7 01:26:10.217350 kubelet[3349]: I0307 01:26:10.217006 3349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:10.219343 containerd[1792]: time="2026-03-07T01:26:10.218841511Z" level=info msg="StopPodSandbox for \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\"" Mar 7 01:26:10.222112 containerd[1792]: time="2026-03-07T01:26:10.220788708Z" level=info msg="Ensure that sandbox e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0 in task-service has been cleanup successfully" Mar 7 01:26:10.225125 containerd[1792]: time="2026-03-07T01:26:10.225011742Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:26:10.266756 containerd[1792]: time="2026-03-07T01:26:10.266712403Z" level=error msg="Failed to destroy network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.267115 containerd[1792]: time="2026-03-07T01:26:10.267055603Z" level=error msg="Failed to destroy network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.267932 containerd[1792]: time="2026-03-07T01:26:10.267903162Z" level=error msg="encountered an error cleaning up failed sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.268093 containerd[1792]: time="2026-03-07T01:26:10.268044561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-458z5,Uid:198e2d2d-ab2f-4ba4-877d-5a441bf77609,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.268811 kubelet[3349]: E0307 01:26:10.268453 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.268811 kubelet[3349]: E0307 01:26:10.268509 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-458z5" Mar 7 01:26:10.268811 kubelet[3349]: E0307 01:26:10.268527 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-458z5" Mar 7 01:26:10.269500 kubelet[3349]: E0307 01:26:10.268571 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-458z5_kube-system(198e2d2d-ab2f-4ba4-877d-5a441bf77609)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-458z5_kube-system(198e2d2d-ab2f-4ba4-877d-5a441bf77609)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-458z5" podUID="198e2d2d-ab2f-4ba4-877d-5a441bf77609" Mar 7 01:26:10.270895 containerd[1792]: time="2026-03-07T01:26:10.270540438Z" level=error msg="encountered an error cleaning up failed sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.271057 containerd[1792]: time="2026-03-07T01:26:10.270979477Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-nsg2r,Uid:cc35d0d5-b5ae-4bde-a7b6-5af37327d945,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.271823 kubelet[3349]: E0307 01:26:10.271707 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.271929 kubelet[3349]: E0307 01:26:10.271909 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nsg2r" Mar 7 01:26:10.272602 kubelet[3349]: E0307 01:26:10.272556 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nsg2r" Mar 7 01:26:10.273138 kubelet[3349]: E0307 01:26:10.272882 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-5b85766d88-nsg2r_calico-system(cc35d0d5-b5ae-4bde-a7b6-5af37327d945)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-nsg2r_calico-system(cc35d0d5-b5ae-4bde-a7b6-5af37327d945)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-nsg2r" podUID="cc35d0d5-b5ae-4bde-a7b6-5af37327d945" Mar 7 01:26:10.297931 containerd[1792]: time="2026-03-07T01:26:10.296757961Z" level=info msg="CreateContainer within sandbox \"112169033f8f8e7e4c45971d9a22756c56832dc70da5610e2784e962e5e5b4b6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5c8882f4369359ce9f1ee6ab7ae038c7989f5f0979459c9b91a968ac9540454\"" Mar 7 01:26:10.299142 containerd[1792]: time="2026-03-07T01:26:10.298259798Z" level=info msg="StartContainer for \"d5c8882f4369359ce9f1ee6ab7ae038c7989f5f0979459c9b91a968ac9540454\"" Mar 7 01:26:10.304668 containerd[1792]: time="2026-03-07T01:26:10.304635469Z" level=error msg="StopPodSandbox for \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\" failed" error="failed to destroy network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.304879 kubelet[3349]: E0307 01:26:10.304840 3349 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:10.305071 kubelet[3349]: E0307 01:26:10.304969 3349 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a"} Mar 7 01:26:10.305071 kubelet[3349]: E0307 01:26:10.305024 3349 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:26:10.305071 kubelet[3349]: E0307 01:26:10.305045 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d95b7b87-cv65p" podUID="90d18621-91ec-4cdc-aef0-b9e05c8877a0" Mar 7 01:26:10.306516 containerd[1792]: time="2026-03-07T01:26:10.306479387Z" level=error msg="Failed to destroy network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 7 01:26:10.306838 containerd[1792]: time="2026-03-07T01:26:10.306798786Z" level=error msg="encountered an error cleaning up failed sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.306899 containerd[1792]: time="2026-03-07T01:26:10.306853866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8d85db8-ctmcz,Uid:e548dd02-7656-4281-b324-543590d586b1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.309332 kubelet[3349]: E0307 01:26:10.307503 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.309332 kubelet[3349]: E0307 01:26:10.307540 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c8d85db8-ctmcz" Mar 7 
01:26:10.309332 kubelet[3349]: E0307 01:26:10.307556 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c8d85db8-ctmcz" Mar 7 01:26:10.309462 kubelet[3349]: E0307 01:26:10.307601 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55c8d85db8-ctmcz_calico-system(e548dd02-7656-4281-b324-543590d586b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55c8d85db8-ctmcz_calico-system(e548dd02-7656-4281-b324-543590d586b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55c8d85db8-ctmcz" podUID="e548dd02-7656-4281-b324-543590d586b1" Mar 7 01:26:10.327976 containerd[1792]: time="2026-03-07T01:26:10.327930076Z" level=error msg="Failed to destroy network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.332486 containerd[1792]: time="2026-03-07T01:26:10.332451510Z" level=error msg="StopPodSandbox for \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\" failed" error="failed to destroy network for sandbox 
\"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.332723 kubelet[3349]: E0307 01:26:10.332666 3349 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:10.332859 kubelet[3349]: E0307 01:26:10.332840 3349 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0"} Mar 7 01:26:10.332945 kubelet[3349]: E0307 01:26:10.332932 3349 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a16e5b6-f499-4738-9cd6-aee9c02063e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:26:10.333048 kubelet[3349]: E0307 01:26:10.333030 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a16e5b6-f499-4738-9cd6-aee9c02063e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-l4xdw" podUID="9a16e5b6-f499-4738-9cd6-aee9c02063e5" Mar 7 01:26:10.338093 containerd[1792]: time="2026-03-07T01:26:10.338059742Z" level=error msg="Failed to destroy network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.338481 containerd[1792]: time="2026-03-07T01:26:10.338421542Z" level=error msg="encountered an error cleaning up failed sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.338607 containerd[1792]: time="2026-03-07T01:26:10.338577141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-2j8nd,Uid:581cbdc8-dccd-4487-b131-103b5bf24401,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.338831 kubelet[3349]: E0307 01:26:10.338809 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.338941 kubelet[3349]: E0307 01:26:10.338925 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f47778fd4-2j8nd" Mar 7 01:26:10.339009 kubelet[3349]: E0307 01:26:10.338993 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f47778fd4-2j8nd" Mar 7 01:26:10.339107 kubelet[3349]: E0307 01:26:10.339086 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f47778fd4-2j8nd_calico-system(581cbdc8-dccd-4487-b131-103b5bf24401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f47778fd4-2j8nd_calico-system(581cbdc8-dccd-4487-b131-103b5bf24401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f47778fd4-2j8nd" podUID="581cbdc8-dccd-4487-b131-103b5bf24401" Mar 7 01:26:10.346173 containerd[1792]: time="2026-03-07T01:26:10.346127251Z" level=error msg="encountered an 
error cleaning up failed sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.346252 containerd[1792]: time="2026-03-07T01:26:10.346187370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-w44bp,Uid:e51adf89-b01f-431e-8e84-4c2814fa6d45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.347046 kubelet[3349]: E0307 01:26:10.346393 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.347046 kubelet[3349]: E0307 01:26:10.346941 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f47778fd4-w44bp" Mar 7 01:26:10.347046 kubelet[3349]: E0307 01:26:10.346960 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f47778fd4-w44bp" Mar 7 01:26:10.347163 kubelet[3349]: E0307 01:26:10.347012 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f47778fd4-w44bp_calico-system(e51adf89-b01f-431e-8e84-4c2814fa6d45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f47778fd4-w44bp_calico-system(e51adf89-b01f-431e-8e84-4c2814fa6d45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f47778fd4-w44bp" podUID="e51adf89-b01f-431e-8e84-4c2814fa6d45" Mar 7 01:26:10.355358 containerd[1792]: time="2026-03-07T01:26:10.355330238Z" level=error msg="Failed to destroy network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.355699 containerd[1792]: time="2026-03-07T01:26:10.355675037Z" level=error msg="encountered an error cleaning up failed sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 
01:26:10.355815 containerd[1792]: time="2026-03-07T01:26:10.355793757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m29f2,Uid:d63cfcaa-807b-4baa-81f4-479a9dbfdd0f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.356072 kubelet[3349]: E0307 01:26:10.356017 3349 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:26:10.356200 kubelet[3349]: E0307 01:26:10.356174 3349 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m29f2" Mar 7 01:26:10.356280 kubelet[3349]: E0307 01:26:10.356255 3349 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m29f2" Mar 7 01:26:10.356513 kubelet[3349]: E0307 
01:26:10.356486 3349 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m29f2_calico-system(d63cfcaa-807b-4baa-81f4-479a9dbfdd0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m29f2_calico-system(d63cfcaa-807b-4baa-81f4-479a9dbfdd0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m29f2" podUID="d63cfcaa-807b-4baa-81f4-479a9dbfdd0f" Mar 7 01:26:10.384508 containerd[1792]: time="2026-03-07T01:26:10.384450436Z" level=info msg="StartContainer for \"d5c8882f4369359ce9f1ee6ab7ae038c7989f5f0979459c9b91a968ac9540454\" returns successfully" Mar 7 01:26:11.218772 kubelet[3349]: I0307 01:26:11.218743 3349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:11.220501 containerd[1792]: time="2026-03-07T01:26:11.219294652Z" level=info msg="StopPodSandbox for \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\"" Mar 7 01:26:11.221340 containerd[1792]: time="2026-03-07T01:26:11.220702290Z" level=info msg="Ensure that sandbox 2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a in task-service has been cleanup successfully" Mar 7 01:26:11.221374 kubelet[3349]: I0307 01:26:11.221331 3349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:11.222620 containerd[1792]: time="2026-03-07T01:26:11.222593927Z" level=info msg="StopPodSandbox for \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\"" Mar 7 01:26:11.222768 
containerd[1792]: time="2026-03-07T01:26:11.222739327Z" level=info msg="Ensure that sandbox d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c in task-service has been cleanup successfully" Mar 7 01:26:11.223057 kubelet[3349]: I0307 01:26:11.223036 3349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:11.223881 containerd[1792]: time="2026-03-07T01:26:11.223553566Z" level=info msg="StopPodSandbox for \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\"" Mar 7 01:26:11.223881 containerd[1792]: time="2026-03-07T01:26:11.223687405Z" level=info msg="Ensure that sandbox b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398 in task-service has been cleanup successfully" Mar 7 01:26:11.227661 kubelet[3349]: I0307 01:26:11.227622 3349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:11.228502 containerd[1792]: time="2026-03-07T01:26:11.228477239Z" level=info msg="StopPodSandbox for \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\"" Mar 7 01:26:11.228624 containerd[1792]: time="2026-03-07T01:26:11.228603518Z" level=info msg="Ensure that sandbox 8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a in task-service has been cleanup successfully" Mar 7 01:26:11.230392 kubelet[3349]: I0307 01:26:11.230243 3349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:11.233205 containerd[1792]: time="2026-03-07T01:26:11.233181752Z" level=info msg="StopPodSandbox for \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\"" Mar 7 01:26:11.233627 containerd[1792]: time="2026-03-07T01:26:11.233444832Z" level=info msg="Ensure that sandbox 
4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64 in task-service has been cleanup successfully" Mar 7 01:26:11.266409 kubelet[3349]: I0307 01:26:11.266345 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fk6gd" podStartSLOduration=4.975585099 podStartE2EDuration="18.266245265s" podCreationTimestamp="2026-03-07 01:25:53 +0000 UTC" firstStartedPulling="2026-03-07 01:25:54.086036138 +0000 UTC m=+22.140792468" lastFinishedPulling="2026-03-07 01:26:07.376696264 +0000 UTC m=+35.431452634" observedRunningTime="2026-03-07 01:26:11.263259149 +0000 UTC m=+39.318015479" watchObservedRunningTime="2026-03-07 01:26:11.266245265 +0000 UTC m=+39.321001595" Mar 7 01:26:11.268836 kubelet[3349]: I0307 01:26:11.268815 3349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:11.273155 containerd[1792]: time="2026-03-07T01:26:11.273115335Z" level=info msg="StopPodSandbox for \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\"" Mar 7 01:26:11.275620 containerd[1792]: time="2026-03-07T01:26:11.275551692Z" level=info msg="Ensure that sandbox 6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8 in task-service has been cleanup successfully" Mar 7 01:26:11.275887 containerd[1792]: time="2026-03-07T01:26:11.275816011Z" level=info msg="StopPodSandbox for \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\"" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.373 [INFO][4523] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.373 [INFO][4523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" iface="eth0" netns="/var/run/netns/cni-cce71fdf-44b5-3fb8-03e0-96738ce2b2b4" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.374 [INFO][4523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" iface="eth0" netns="/var/run/netns/cni-cce71fdf-44b5-3fb8-03e0-96738ce2b2b4" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.376 [INFO][4523] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" iface="eth0" netns="/var/run/netns/cni-cce71fdf-44b5-3fb8-03e0-96738ce2b2b4" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.376 [INFO][4523] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.378 [INFO][4523] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.493 [INFO][4624] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.497 [INFO][4624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.497 [INFO][4624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.518 [WARNING][4624] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.518 [INFO][4624] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.522 [INFO][4624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:11.540522 containerd[1792]: 2026-03-07 01:26:11.526 [INFO][4523] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:11.547006 containerd[1792]: time="2026-03-07T01:26:11.546770070Z" level=info msg="TearDown network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\" successfully" Mar 7 01:26:11.547006 containerd[1792]: time="2026-03-07T01:26:11.546806870Z" level=info msg="StopPodSandbox for \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\" returns successfully" Mar 7 01:26:11.547543 systemd[1]: run-netns-cni\x2dcce71fdf\x2d44b5\x2d3fb8\x2d03e0\x2d96738ce2b2b4.mount: Deactivated successfully. 
Mar 7 01:26:11.549491 containerd[1792]: time="2026-03-07T01:26:11.548493868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-458z5,Uid:198e2d2d-ab2f-4ba4-877d-5a441bf77609,Namespace:kube-system,Attempt:1,}" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.485 [INFO][4567] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.485 [INFO][4567] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" iface="eth0" netns="/var/run/netns/cni-b4de30b1-cf54-08c8-2381-2b25697bf32e" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.488 [INFO][4567] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" iface="eth0" netns="/var/run/netns/cni-b4de30b1-cf54-08c8-2381-2b25697bf32e" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.488 [INFO][4567] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" iface="eth0" netns="/var/run/netns/cni-b4de30b1-cf54-08c8-2381-2b25697bf32e" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.488 [INFO][4567] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.488 [INFO][4567] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.535 [INFO][4649] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.535 [INFO][4649] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.535 [INFO][4649] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.552 [WARNING][4649] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.552 [INFO][4649] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.553 [INFO][4649] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:11.566340 containerd[1792]: 2026-03-07 01:26:11.561 [INFO][4567] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:11.570603 containerd[1792]: time="2026-03-07T01:26:11.569283680Z" level=info msg="TearDown network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\" successfully" Mar 7 01:26:11.570603 containerd[1792]: time="2026-03-07T01:26:11.569324800Z" level=info msg="StopPodSandbox for \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\" returns successfully" Mar 7 01:26:11.572006 systemd[1]: run-netns-cni\x2db4de30b1\x2dcf54\x2d08c8\x2d2381\x2d2b25697bf32e.mount: Deactivated successfully. 
Mar 7 01:26:11.575760 containerd[1792]: time="2026-03-07T01:26:11.575643552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nsg2r,Uid:cc35d0d5-b5ae-4bde-a7b6-5af37327d945,Namespace:calico-system,Attempt:1,}" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.413 [INFO][4555] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.414 [INFO][4555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" iface="eth0" netns="/var/run/netns/cni-c4af89b4-4efe-d204-455c-f160755cedc8" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.414 [INFO][4555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" iface="eth0" netns="/var/run/netns/cni-c4af89b4-4efe-d204-455c-f160755cedc8" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.414 [INFO][4555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" iface="eth0" netns="/var/run/netns/cni-c4af89b4-4efe-d204-455c-f160755cedc8" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.414 [INFO][4555] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.414 [INFO][4555] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.583 [INFO][4630] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.584 [INFO][4630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.584 [INFO][4630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.608 [WARNING][4630] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.608 [INFO][4630] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.609 [INFO][4630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:11.624198 containerd[1792]: 2026-03-07 01:26:11.620 [INFO][4555] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:11.626570 containerd[1792]: time="2026-03-07T01:26:11.626527843Z" level=info msg="TearDown network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\" successfully" Mar 7 01:26:11.626650 containerd[1792]: time="2026-03-07T01:26:11.626636763Z" level=info msg="StopPodSandbox for \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\" returns successfully" Mar 7 01:26:11.629471 containerd[1792]: time="2026-03-07T01:26:11.628748000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8d85db8-ctmcz,Uid:e548dd02-7656-4281-b324-543590d586b1,Namespace:calico-system,Attempt:1,}" Mar 7 01:26:11.628976 systemd[1]: run-netns-cni\x2dc4af89b4\x2d4efe\x2dd204\x2d455c\x2df160755cedc8.mount: Deactivated successfully. 
Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.418 [INFO][4541] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.420 [INFO][4541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" iface="eth0" netns="/var/run/netns/cni-d28143a0-ca59-00c3-cf0f-af263b001a3d" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.422 [INFO][4541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" iface="eth0" netns="/var/run/netns/cni-d28143a0-ca59-00c3-cf0f-af263b001a3d" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.423 [INFO][4541] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" iface="eth0" netns="/var/run/netns/cni-d28143a0-ca59-00c3-cf0f-af263b001a3d" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.423 [INFO][4541] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.423 [INFO][4541] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.586 [INFO][4634] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.586 [INFO][4634] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.610 [INFO][4634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.628 [WARNING][4634] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.628 [INFO][4634] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.630 [INFO][4634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:11.656634 containerd[1792]: 2026-03-07 01:26:11.647 [INFO][4541] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:11.658245 containerd[1792]: time="2026-03-07T01:26:11.656833043Z" level=info msg="TearDown network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\" successfully" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.449 [INFO][4549] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.449 [INFO][4549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" iface="eth0" netns="/var/run/netns/cni-1a46a9fd-1b14-1688-dfad-bffeb7572601" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.449 [INFO][4549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" iface="eth0" netns="/var/run/netns/cni-1a46a9fd-1b14-1688-dfad-bffeb7572601" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.449 [INFO][4549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" iface="eth0" netns="/var/run/netns/cni-1a46a9fd-1b14-1688-dfad-bffeb7572601" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.450 [INFO][4549] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.450 [INFO][4549] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.646 [INFO][4642] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.646 [INFO][4642] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.647 [INFO][4642] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.658 [WARNING][4642] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.658 [INFO][4642] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.661 [INFO][4642] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:11.671220 containerd[1792]: 2026-03-07 01:26:11.664 [INFO][4549] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:11.673667 containerd[1792]: time="2026-03-07T01:26:11.656857283Z" level=info msg="StopPodSandbox for \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\" returns successfully" Mar 7 01:26:11.673743 containerd[1792]: time="2026-03-07T01:26:11.671408903Z" level=info msg="TearDown network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\" successfully" Mar 7 01:26:11.673743 containerd[1792]: time="2026-03-07T01:26:11.673733780Z" level=info msg="StopPodSandbox for \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\" returns successfully" Mar 7 01:26:11.674553 containerd[1792]: time="2026-03-07T01:26:11.674367139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-2j8nd,Uid:581cbdc8-dccd-4487-b131-103b5bf24401,Namespace:calico-system,Attempt:1,}" Mar 7 01:26:11.676723 containerd[1792]: time="2026-03-07T01:26:11.676692216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m29f2,Uid:d63cfcaa-807b-4baa-81f4-479a9dbfdd0f,Namespace:calico-system,Attempt:1,}" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.542 [INFO][4593] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.544 [INFO][4593] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" iface="eth0" netns="/var/run/netns/cni-5f2a4765-d61c-6601-6e67-ce5d019dbfc6" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.544 [INFO][4593] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" iface="eth0" netns="/var/run/netns/cni-5f2a4765-d61c-6601-6e67-ce5d019dbfc6" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.545 [INFO][4593] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" iface="eth0" netns="/var/run/netns/cni-5f2a4765-d61c-6601-6e67-ce5d019dbfc6" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.546 [INFO][4593] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.546 [INFO][4593] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.662 [INFO][4660] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.664 [INFO][4660] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.679 [INFO][4660] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.695 [WARNING][4660] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.695 [INFO][4660] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.700 [INFO][4660] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:11.714814 containerd[1792]: 2026-03-07 01:26:11.705 [INFO][4593] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:11.731450 containerd[1792]: time="2026-03-07T01:26:11.731055343Z" level=info msg="TearDown network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\" successfully" Mar 7 01:26:11.731450 containerd[1792]: time="2026-03-07T01:26:11.731094943Z" level=info msg="StopPodSandbox for \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\" returns successfully" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.575 [INFO][4592] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.576 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" iface="eth0" netns="/var/run/netns/cni-5025269c-8884-5c37-9087-730ae4649d4a" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.576 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" iface="eth0" netns="/var/run/netns/cni-5025269c-8884-5c37-9087-730ae4649d4a" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.576 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" iface="eth0" netns="/var/run/netns/cni-5025269c-8884-5c37-9087-730ae4649d4a" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.576 [INFO][4592] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.576 [INFO][4592] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.694 [INFO][4667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.695 [INFO][4667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.700 [INFO][4667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.718 [WARNING][4667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.718 [INFO][4667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.720 [INFO][4667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:11.756951 containerd[1792]: 2026-03-07 01:26:11.727 [INFO][4592] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:11.757929 containerd[1792]: time="2026-03-07T01:26:11.757789867Z" level=info msg="TearDown network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\" successfully" Mar 7 01:26:11.757929 containerd[1792]: time="2026-03-07T01:26:11.757821307Z" level=info msg="StopPodSandbox for \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\" returns successfully" Mar 7 01:26:11.757929 containerd[1792]: time="2026-03-07T01:26:11.758909426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-w44bp,Uid:e51adf89-b01f-431e-8e84-4c2814fa6d45,Namespace:calico-system,Attempt:1,}" Mar 7 01:26:11.818398 kubelet[3349]: I0307 01:26:11.818207 3349 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-nginx-config\") pod \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " Mar 7 01:26:11.820468 kubelet[3349]: I0307 01:26:11.820343 3349 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-backend-key-pair\") pod \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " Mar 7 01:26:11.820468 kubelet[3349]: I0307 01:26:11.820381 3349 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82shw\" (UniqueName: \"kubernetes.io/projected/90d18621-91ec-4cdc-aef0-b9e05c8877a0-kube-api-access-82shw\") pod \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " Mar 7 01:26:11.820468 kubelet[3349]: I0307 01:26:11.820407 3349 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-ca-bundle\") pod \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\" (UID: \"90d18621-91ec-4cdc-aef0-b9e05c8877a0\") " Mar 7 01:26:11.820850 kubelet[3349]: I0307 01:26:11.820755 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "90d18621-91ec-4cdc-aef0-b9e05c8877a0" (UID: "90d18621-91ec-4cdc-aef0-b9e05c8877a0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:26:11.823039 kubelet[3349]: I0307 01:26:11.822962 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "90d18621-91ec-4cdc-aef0-b9e05c8877a0" (UID: "90d18621-91ec-4cdc-aef0-b9e05c8877a0"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:26:11.881547 kubelet[3349]: I0307 01:26:11.881497 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d18621-91ec-4cdc-aef0-b9e05c8877a0-kube-api-access-82shw" (OuterVolumeSpecName: "kube-api-access-82shw") pod "90d18621-91ec-4cdc-aef0-b9e05c8877a0" (UID: "90d18621-91ec-4cdc-aef0-b9e05c8877a0"). InnerVolumeSpecName "kube-api-access-82shw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:26:11.900601 kubelet[3349]: I0307 01:26:11.900535 3349 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "90d18621-91ec-4cdc-aef0-b9e05c8877a0" (UID: "90d18621-91ec-4cdc-aef0-b9e05c8877a0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:26:11.921495 kubelet[3349]: I0307 01:26:11.921172 3349 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-ca-bundle\") on node \"ci-4081.3.6-n-0072e04abc\" DevicePath \"\"" Mar 7 01:26:11.921495 kubelet[3349]: I0307 01:26:11.921207 3349 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/90d18621-91ec-4cdc-aef0-b9e05c8877a0-nginx-config\") on node \"ci-4081.3.6-n-0072e04abc\" DevicePath \"\"" Mar 7 01:26:11.921495 kubelet[3349]: I0307 01:26:11.921219 3349 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/90d18621-91ec-4cdc-aef0-b9e05c8877a0-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-0072e04abc\" DevicePath \"\"" Mar 7 01:26:11.921495 kubelet[3349]: I0307 01:26:11.921228 3349 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-82shw\" (UniqueName: \"kubernetes.io/projected/90d18621-91ec-4cdc-aef0-b9e05c8877a0-kube-api-access-82shw\") on node \"ci-4081.3.6-n-0072e04abc\" DevicePath \"\"" Mar 7 01:26:12.086530 systemd-networkd[1380]: cali7486af25328: Link UP Mar 7 01:26:12.088690 systemd-networkd[1380]: cali7486af25328: Gained carrier Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.670 [ERROR][4675] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.699 [INFO][4675] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0 coredns-674b8bbfcf- kube-system 198e2d2d-ab2f-4ba4-877d-5a441bf77609 921 0 2026-03-07 01:25:38 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc coredns-674b8bbfcf-458z5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7486af25328 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.699 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.953 [INFO][4713] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" HandleID="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.973 [INFO][4713] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" HandleID="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"coredns-674b8bbfcf-458z5", "timestamp":"2026-03-07 01:26:11.953833684 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a0840)} Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.973 [INFO][4713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.973 [INFO][4713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.973 [INFO][4713] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.975 [INFO][4713] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.979 [INFO][4713] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.990 [INFO][4713] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.993 [INFO][4713] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.997 [INFO][4713] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:11.997 [INFO][4713] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:12.000 [INFO][4713] 
ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5 Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:12.008 [INFO][4713] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:12.015 [INFO][4713] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.1/26] block=192.168.26.0/26 handle="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:12.015 [INFO][4713] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.1/26] handle="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:12.015 [INFO][4713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:26:12.179448 containerd[1792]: 2026-03-07 01:26:12.016 [INFO][4713] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.1/26] IPv6=[] ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" HandleID="k8s-pod-network.bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:12.180043 containerd[1792]: 2026-03-07 01:26:12.050 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"198e2d2d-ab2f-4ba4-877d-5a441bf77609", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"coredns-674b8bbfcf-458z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7486af25328", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.180043 containerd[1792]: 2026-03-07 01:26:12.052 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.1/32] ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:12.180043 containerd[1792]: 2026-03-07 01:26:12.052 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7486af25328 ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:12.180043 containerd[1792]: 2026-03-07 01:26:12.093 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:12.180043 containerd[1792]: 2026-03-07 01:26:12.106 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" 
WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"198e2d2d-ab2f-4ba4-877d-5a441bf77609", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5", Pod:"coredns-674b8bbfcf-458z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7486af25328", MAC:"1a:ed:a4:f3:fc:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.180043 containerd[1792]: 
2026-03-07 01:26:12.165 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5" Namespace="kube-system" Pod="coredns-674b8bbfcf-458z5" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:12.278423 systemd-networkd[1380]: cali9d3dd54da08: Link UP Mar 7 01:26:12.278637 systemd-networkd[1380]: cali9d3dd54da08: Gained carrier Mar 7 01:26:12.296069 containerd[1792]: time="2026-03-07T01:26:12.291087951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:12.296069 containerd[1792]: time="2026-03-07T01:26:12.291151111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:12.296069 containerd[1792]: time="2026-03-07T01:26:12.291162791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.296069 containerd[1792]: time="2026-03-07T01:26:12.291251631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:11.776 [ERROR][4687] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:11.827 [INFO][4687] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0 goldmane-5b85766d88- calico-system cc35d0d5-b5ae-4bde-a7b6-5af37327d945 925 0 2026-03-07 01:25:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc goldmane-5b85766d88-nsg2r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9d3dd54da08 [] [] }} ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:11.827 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.100 [INFO][4832] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" HandleID="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" 
Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.169 [INFO][4832] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" HandleID="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ffe60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"goldmane-5b85766d88-nsg2r", "timestamp":"2026-03-07 01:26:12.100306487 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001842c0)} Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.169 [INFO][4832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.169 [INFO][4832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.169 [INFO][4832] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.182 [INFO][4832] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.197 [INFO][4832] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.202 [INFO][4832] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.206 [INFO][4832] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.212 [INFO][4832] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.212 [INFO][4832] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.214 [INFO][4832] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81 Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.221 [INFO][4832] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.231 [INFO][4832] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.26.2/26] block=192.168.26.0/26 handle="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.231 [INFO][4832] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.2/26] handle="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.234 [INFO][4832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:12.318022 containerd[1792]: 2026-03-07 01:26:12.234 [INFO][4832] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.2/26] IPv6=[] ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" HandleID="k8s-pod-network.2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:12.318922 containerd[1792]: 2026-03-07 01:26:12.246 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cc35d0d5-b5ae-4bde-a7b6-5af37327d945", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"goldmane-5b85766d88-nsg2r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9d3dd54da08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.318922 containerd[1792]: 2026-03-07 01:26:12.246 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.2/32] ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:12.318922 containerd[1792]: 2026-03-07 01:26:12.246 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d3dd54da08 ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:12.318922 containerd[1792]: 2026-03-07 01:26:12.277 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:12.318922 containerd[1792]: 2026-03-07 01:26:12.280 [INFO][4687] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cc35d0d5-b5ae-4bde-a7b6-5af37327d945", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81", Pod:"goldmane-5b85766d88-nsg2r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9d3dd54da08", MAC:"ca:ed:04:79:23:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.318922 containerd[1792]: 2026-03-07 01:26:12.298 [INFO][4687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81" Namespace="calico-system" 
Pod="goldmane-5b85766d88-nsg2r" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:12.427565 kubelet[3349]: I0307 01:26:12.426381 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8f7\" (UniqueName: \"kubernetes.io/projected/c12680d6-9928-4db9-850a-1fb0fcd2d7f9-kube-api-access-dd8f7\") pod \"whisker-68bb66c496-l98z4\" (UID: \"c12680d6-9928-4db9-850a-1fb0fcd2d7f9\") " pod="calico-system/whisker-68bb66c496-l98z4" Mar 7 01:26:12.427565 kubelet[3349]: I0307 01:26:12.426423 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c12680d6-9928-4db9-850a-1fb0fcd2d7f9-whisker-backend-key-pair\") pod \"whisker-68bb66c496-l98z4\" (UID: \"c12680d6-9928-4db9-850a-1fb0fcd2d7f9\") " pod="calico-system/whisker-68bb66c496-l98z4" Mar 7 01:26:12.427565 kubelet[3349]: I0307 01:26:12.426443 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c12680d6-9928-4db9-850a-1fb0fcd2d7f9-whisker-ca-bundle\") pod \"whisker-68bb66c496-l98z4\" (UID: \"c12680d6-9928-4db9-850a-1fb0fcd2d7f9\") " pod="calico-system/whisker-68bb66c496-l98z4" Mar 7 01:26:12.427565 kubelet[3349]: I0307 01:26:12.426460 3349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c12680d6-9928-4db9-850a-1fb0fcd2d7f9-nginx-config\") pod \"whisker-68bb66c496-l98z4\" (UID: \"c12680d6-9928-4db9-850a-1fb0fcd2d7f9\") " pod="calico-system/whisker-68bb66c496-l98z4" Mar 7 01:26:12.433997 systemd-networkd[1380]: cali9c208ebf924: Link UP Mar 7 01:26:12.437212 systemd-networkd[1380]: cali9c208ebf924: Gained carrier Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.013 [ERROR][4768] cni-plugin/utils.go 
116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.047 [INFO][4768] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0 csi-node-driver- calico-system d63cfcaa-807b-4baa-81f4-479a9dbfdd0f 922 0 2026-03-07 01:25:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc csi-node-driver-m29f2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9c208ebf924 [] [] }} ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.049 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.178 [INFO][4860] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" HandleID="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.198 [INFO][4860] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" HandleID="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381440), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"csi-node-driver-m29f2", "timestamp":"2026-03-07 01:26:12.178404782 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400013c000)} Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.198 [INFO][4860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.234 [INFO][4860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.234 [INFO][4860] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.275 [INFO][4860] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.301 [INFO][4860] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.313 [INFO][4860] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.317 [INFO][4860] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.323 [INFO][4860] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.323 [INFO][4860] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.327 [INFO][4860] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8 Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.334 [INFO][4860] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.358 [INFO][4860] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.26.3/26] block=192.168.26.0/26 handle="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.359 [INFO][4860] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.3/26] handle="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.359 [INFO][4860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:12.496286 containerd[1792]: 2026-03-07 01:26:12.359 [INFO][4860] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.3/26] IPv6=[] ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" HandleID="k8s-pod-network.a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:12.498377 containerd[1792]: 2026-03-07 01:26:12.418 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"csi-node-driver-m29f2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c208ebf924", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.498377 containerd[1792]: 2026-03-07 01:26:12.420 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.3/32] ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:12.498377 containerd[1792]: 2026-03-07 01:26:12.420 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c208ebf924 ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:12.498377 containerd[1792]: 2026-03-07 01:26:12.441 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:12.498377 containerd[1792]: 2026-03-07 01:26:12.456 
[INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8", Pod:"csi-node-driver-m29f2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c208ebf924", MAC:"0e:26:05:30:a6:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.498377 containerd[1792]: 2026-03-07 01:26:12.484 [INFO][4768] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8" Namespace="calico-system" Pod="csi-node-driver-m29f2" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:12.550927 containerd[1792]: time="2026-03-07T01:26:12.523391959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:12.550927 containerd[1792]: time="2026-03-07T01:26:12.523437839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:12.550927 containerd[1792]: time="2026-03-07T01:26:12.523448039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.550927 containerd[1792]: time="2026-03-07T01:26:12.523523199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.571166 systemd-networkd[1380]: calib095b0f40c2: Link UP Mar 7 01:26:12.571618 systemd-networkd[1380]: calib095b0f40c2: Gained carrier Mar 7 01:26:12.586177 systemd[1]: run-netns-cni\x2dd28143a0\x2dca59\x2d00c3\x2dcf0f\x2daf263b001a3d.mount: Deactivated successfully. Mar 7 01:26:12.586738 systemd[1]: run-netns-cni\x2d5025269c\x2d8884\x2d5c37\x2d9087\x2d730ae4649d4a.mount: Deactivated successfully. Mar 7 01:26:12.586973 systemd[1]: run-netns-cni\x2d1a46a9fd\x2d1b14\x2d1688\x2ddfad\x2dbffeb7572601.mount: Deactivated successfully. Mar 7 01:26:12.587059 systemd[1]: run-netns-cni\x2d5f2a4765\x2dd61c\x2d6601\x2d6e67\x2dce5d019dbfc6.mount: Deactivated successfully. Mar 7 01:26:12.587129 systemd[1]: var-lib-kubelet-pods-90d18621\x2d91ec\x2d4cdc\x2daef0\x2db9e05c8877a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d82shw.mount: Deactivated successfully. 
Mar 7 01:26:12.587210 systemd[1]: var-lib-kubelet-pods-90d18621\x2d91ec\x2d4cdc\x2daef0\x2db9e05c8877a0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:11.754 [ERROR][4714] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:11.802 [INFO][4714] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0 calico-kube-controllers-55c8d85db8- calico-system e548dd02-7656-4281-b324-543590d586b1 923 0 2026-03-07 01:25:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55c8d85db8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc calico-kube-controllers-55c8d85db8-ctmcz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib095b0f40c2 [] [] }} ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:11.802 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 
01:26:12.192 [INFO][4785] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" HandleID="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.217 [INFO][4785] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" HandleID="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400020e440), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"calico-kube-controllers-55c8d85db8-ctmcz", "timestamp":"2026-03-07 01:26:12.192925043 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010a420)} Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.217 [INFO][4785] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.359 [INFO][4785] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.359 [INFO][4785] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.381 [INFO][4785] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.418 [INFO][4785] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.443 [INFO][4785] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.462 [INFO][4785] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.472 [INFO][4785] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.472 [INFO][4785] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.475 [INFO][4785] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.487 [INFO][4785] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.507 [INFO][4785] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.26.4/26] block=192.168.26.0/26 handle="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.507 [INFO][4785] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.4/26] handle="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.507 [INFO][4785] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:12.619257 containerd[1792]: 2026-03-07 01:26:12.507 [INFO][4785] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.4/26] IPv6=[] ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" HandleID="k8s-pod-network.0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:12.620208 containerd[1792]: 2026-03-07 01:26:12.545 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0", GenerateName:"calico-kube-controllers-55c8d85db8-", Namespace:"calico-system", SelfLink:"", UID:"e548dd02-7656-4281-b324-543590d586b1", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"55c8d85db8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"calico-kube-controllers-55c8d85db8-ctmcz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib095b0f40c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.620208 containerd[1792]: 2026-03-07 01:26:12.545 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.4/32] ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:12.620208 containerd[1792]: 2026-03-07 01:26:12.545 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib095b0f40c2 ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:12.620208 containerd[1792]: 2026-03-07 01:26:12.571 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" 
Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:12.620208 containerd[1792]: 2026-03-07 01:26:12.588 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0", GenerateName:"calico-kube-controllers-55c8d85db8-", Namespace:"calico-system", SelfLink:"", UID:"e548dd02-7656-4281-b324-543590d586b1", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c8d85db8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b", Pod:"calico-kube-controllers-55c8d85db8-ctmcz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib095b0f40c2", MAC:"ea:39:ed:d8:c6:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.620208 containerd[1792]: 2026-03-07 01:26:12.611 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b" Namespace="calico-system" Pod="calico-kube-controllers-55c8d85db8-ctmcz" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:12.644568 systemd-networkd[1380]: cali65d601c98f0: Link UP Mar 7 01:26:12.645481 systemd-networkd[1380]: cali65d601c98f0: Gained carrier Mar 7 01:26:12.654065 containerd[1792]: time="2026-03-07T01:26:12.654036344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-458z5,Uid:198e2d2d-ab2f-4ba4-877d-5a441bf77609,Namespace:kube-system,Attempt:1,} returns sandbox id \"bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5\"" Mar 7 01:26:12.657734 kernel: calico-node[4855]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:26:12.664533 containerd[1792]: time="2026-03-07T01:26:12.662693932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:12.664533 containerd[1792]: time="2026-03-07T01:26:12.662738292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:12.664533 containerd[1792]: time="2026-03-07T01:26:12.662748692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.664533 containerd[1792]: time="2026-03-07T01:26:12.662820132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.671329 containerd[1792]: time="2026-03-07T01:26:12.669900722Z" level=info msg="CreateContainer within sandbox \"bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:26:12.697460 containerd[1792]: time="2026-03-07T01:26:12.697354445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bb66c496-l98z4,Uid:c12680d6-9928-4db9-850a-1fb0fcd2d7f9,Namespace:calico-system,Attempt:0,}" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.074 [ERROR][4757] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.175 [INFO][4757] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0 calico-apiserver-6f47778fd4- calico-system 581cbdc8-dccd-4487-b131-103b5bf24401 924 0 2026-03-07 01:25:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f47778fd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc calico-apiserver-6f47778fd4-2j8nd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali65d601c98f0 [] [] }} ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.175 [INFO][4757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.355 [INFO][4886] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" HandleID="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.418 [INFO][4886] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" HandleID="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000398070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"calico-apiserver-6f47778fd4-2j8nd", "timestamp":"2026-03-07 01:26:12.355062385 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c86e0)} Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.418 [INFO][4886] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.510 [INFO][4886] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.510 [INFO][4886] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.514 [INFO][4886] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.523 [INFO][4886] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.571 [INFO][4886] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.581 [INFO][4886] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.595 [INFO][4886] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.595 [INFO][4886] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.598 [INFO][4886] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.612 [INFO][4886] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.622 [INFO][4886] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.26.5/26] block=192.168.26.0/26 handle="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.622 [INFO][4886] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.5/26] handle="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.622 [INFO][4886] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:12.716859 containerd[1792]: 2026-03-07 01:26:12.622 [INFO][4886] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.5/26] IPv6=[] ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" HandleID="k8s-pod-network.4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:12.717560 containerd[1792]: 2026-03-07 01:26:12.636 [INFO][4757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"581cbdc8-dccd-4487-b131-103b5bf24401", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"calico-apiserver-6f47778fd4-2j8nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65d601c98f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.717560 containerd[1792]: 2026-03-07 01:26:12.636 [INFO][4757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.5/32] ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:12.717560 containerd[1792]: 2026-03-07 01:26:12.636 [INFO][4757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65d601c98f0 ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:12.717560 containerd[1792]: 2026-03-07 01:26:12.650 [INFO][4757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" 
WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:12.717560 containerd[1792]: 2026-03-07 01:26:12.661 [INFO][4757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"581cbdc8-dccd-4487-b131-103b5bf24401", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e", Pod:"calico-apiserver-6f47778fd4-2j8nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65d601c98f0", MAC:"06:a4:4b:6d:5e:82", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.717560 containerd[1792]: 2026-03-07 01:26:12.685 [INFO][4757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-2j8nd" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:12.778478 containerd[1792]: time="2026-03-07T01:26:12.774890021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nsg2r,Uid:cc35d0d5-b5ae-4bde-a7b6-5af37327d945,Namespace:calico-system,Attempt:1,} returns sandbox id \"2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81\"" Mar 7 01:26:12.796664 containerd[1792]: time="2026-03-07T01:26:12.795896993Z" level=info msg="CreateContainer within sandbox \"bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5242df02d17e35cb24b64be2808f52094ad72a7aa477dfddc5503317d00aa650\"" Mar 7 01:26:12.807967 containerd[1792]: time="2026-03-07T01:26:12.807155458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:12.807967 containerd[1792]: time="2026-03-07T01:26:12.807209858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:12.807967 containerd[1792]: time="2026-03-07T01:26:12.807235618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.807967 containerd[1792]: time="2026-03-07T01:26:12.807468098Z" level=info msg="StartContainer for \"5242df02d17e35cb24b64be2808f52094ad72a7aa477dfddc5503317d00aa650\"" Mar 7 01:26:12.862037 containerd[1792]: time="2026-03-07T01:26:12.861399745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:26:12.866646 systemd-networkd[1380]: calie8e7d9796aa: Link UP Mar 7 01:26:12.869410 systemd-networkd[1380]: calie8e7d9796aa: Gained carrier Mar 7 01:26:12.870949 containerd[1792]: time="2026-03-07T01:26:12.811146133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.164 [ERROR][4793] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.189 [INFO][4793] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0 calico-apiserver-6f47778fd4- calico-system e51adf89-b01f-431e-8e84-4c2814fa6d45 928 0 2026-03-07 01:25:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f47778fd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc calico-apiserver-6f47778fd4-w44bp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie8e7d9796aa [] [] }} ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" 
WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.190 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.427 [INFO][4894] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" HandleID="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.479 [INFO][4894] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" HandleID="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"calico-apiserver-6f47778fd4-w44bp", "timestamp":"2026-03-07 01:26:12.427340688 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004cf600)} Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.479 [INFO][4894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.622 [INFO][4894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.623 [INFO][4894] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.629 [INFO][4894] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.655 [INFO][4894] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.668 [INFO][4894] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.681 [INFO][4894] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.695 [INFO][4894] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.695 [INFO][4894] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.708 [INFO][4894] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.726 [INFO][4894] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" 
host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.747 [INFO][4894] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.6/26] block=192.168.26.0/26 handle="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.747 [INFO][4894] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.6/26] handle="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.747 [INFO][4894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:12.921420 containerd[1792]: 2026-03-07 01:26:12.747 [INFO][4894] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.6/26] IPv6=[] ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" HandleID="k8s-pod-network.9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:12.923187 containerd[1792]: 2026-03-07 01:26:12.796 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"e51adf89-b01f-431e-8e84-4c2814fa6d45", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"calico-apiserver-6f47778fd4-w44bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie8e7d9796aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.923187 containerd[1792]: 2026-03-07 01:26:12.796 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.6/32] ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:12.923187 containerd[1792]: 2026-03-07 01:26:12.796 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8e7d9796aa ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:12.923187 containerd[1792]: 2026-03-07 01:26:12.868 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:12.923187 containerd[1792]: 2026-03-07 01:26:12.874 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"e51adf89-b01f-431e-8e84-4c2814fa6d45", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c", Pod:"calico-apiserver-6f47778fd4-w44bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie8e7d9796aa", MAC:"fe:66:63:ed:c6:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:12.923187 containerd[1792]: 2026-03-07 01:26:12.891 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c" Namespace="calico-system" Pod="calico-apiserver-6f47778fd4-w44bp" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:12.949892 containerd[1792]: time="2026-03-07T01:26:12.949045507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:12.949892 containerd[1792]: time="2026-03-07T01:26:12.949101267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:12.949892 containerd[1792]: time="2026-03-07T01:26:12.949116827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:12.949892 containerd[1792]: time="2026-03-07T01:26:12.949199147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:13.064872 containerd[1792]: time="2026-03-07T01:26:13.064750992Z" level=info msg="StartContainer for \"5242df02d17e35cb24b64be2808f52094ad72a7aa477dfddc5503317d00aa650\" returns successfully" Mar 7 01:26:13.066011 containerd[1792]: time="2026-03-07T01:26:13.063502194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:13.069783 containerd[1792]: time="2026-03-07T01:26:13.066478470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:13.079566 containerd[1792]: time="2026-03-07T01:26:13.077920894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m29f2,Uid:d63cfcaa-807b-4baa-81f4-479a9dbfdd0f,Namespace:calico-system,Attempt:1,} returns sandbox id \"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8\"" Mar 7 01:26:13.079566 containerd[1792]: time="2026-03-07T01:26:13.069343946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:13.079566 containerd[1792]: time="2026-03-07T01:26:13.069724545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:13.122828 containerd[1792]: time="2026-03-07T01:26:13.122789874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c8d85db8-ctmcz,Uid:e548dd02-7656-4281-b324-543590d586b1,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b\"" Mar 7 01:26:13.204196 containerd[1792]: time="2026-03-07T01:26:13.203388486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-w44bp,Uid:e51adf89-b01f-431e-8e84-4c2814fa6d45,Namespace:calico-system,Attempt:1,} returns sandbox id \"9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c\"" Mar 7 01:26:13.220885 containerd[1792]: time="2026-03-07T01:26:13.219435664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f47778fd4-2j8nd,Uid:581cbdc8-dccd-4487-b131-103b5bf24401,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e\"" Mar 7 01:26:13.239628 systemd-networkd[1380]: cali655c782d34c: Link UP Mar 7 01:26:13.240841 systemd-networkd[1380]: cali655c782d34c: Gained carrier Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.013 [INFO][5142] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0 whisker-68bb66c496- calico-system c12680d6-9928-4db9-850a-1fb0fcd2d7f9 952 0 2026-03-07 01:26:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68bb66c496 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc whisker-68bb66c496-l98z4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali655c782d34c [] [] }} ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.013 [INFO][5142] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.170 [INFO][5215] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" HandleID="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.182 [INFO][5215] ipam/ipam_plugin.go 301: Auto 
assigning IP ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" HandleID="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038bd40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"whisker-68bb66c496-l98z4", "timestamp":"2026-03-07 01:26:13.17062429 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003711e0)} Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.182 [INFO][5215] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.182 [INFO][5215] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.182 [INFO][5215] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.198 [INFO][5215] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.207 [INFO][5215] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.214 [INFO][5215] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.216 [INFO][5215] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.218 [INFO][5215] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.218 [INFO][5215] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.220 [INFO][5215] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.224 [INFO][5215] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.233 [INFO][5215] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.26.7/26] block=192.168.26.0/26 handle="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.233 [INFO][5215] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.7/26] handle="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.233 [INFO][5215] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:13.259372 containerd[1792]: 2026-03-07 01:26:13.233 [INFO][5215] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.7/26] IPv6=[] ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" HandleID="k8s-pod-network.5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" Mar 7 01:26:13.259884 containerd[1792]: 2026-03-07 01:26:13.236 [INFO][5142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0", GenerateName:"whisker-68bb66c496-", Namespace:"calico-system", SelfLink:"", UID:"c12680d6-9928-4db9-850a-1fb0fcd2d7f9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68bb66c496", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"whisker-68bb66c496-l98z4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali655c782d34c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:13.259884 containerd[1792]: 2026-03-07 01:26:13.236 [INFO][5142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.7/32] ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" Mar 7 01:26:13.259884 containerd[1792]: 2026-03-07 01:26:13.236 [INFO][5142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali655c782d34c ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" Mar 7 01:26:13.259884 containerd[1792]: 2026-03-07 01:26:13.240 [INFO][5142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" Mar 7 01:26:13.259884 containerd[1792]: 2026-03-07 01:26:13.242 [INFO][5142] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0", GenerateName:"whisker-68bb66c496-", Namespace:"calico-system", SelfLink:"", UID:"c12680d6-9928-4db9-850a-1fb0fcd2d7f9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68bb66c496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe", Pod:"whisker-68bb66c496-l98z4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali655c782d34c", MAC:"66:1b:47:c6:92:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:13.259884 containerd[1792]: 2026-03-07 01:26:13.255 [INFO][5142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe" Namespace="calico-system" Pod="whisker-68bb66c496-l98z4" 
WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--68bb66c496--l98z4-eth0" Mar 7 01:26:13.309357 containerd[1792]: time="2026-03-07T01:26:13.307650826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:13.309357 containerd[1792]: time="2026-03-07T01:26:13.307841466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:13.309357 containerd[1792]: time="2026-03-07T01:26:13.307858426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:13.309357 containerd[1792]: time="2026-03-07T01:26:13.308065825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:13.323090 kubelet[3349]: I0307 01:26:13.322904 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-458z5" podStartSLOduration=35.322882285 podStartE2EDuration="35.322882285s" podCreationTimestamp="2026-03-07 01:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:26:13.320950448 +0000 UTC m=+41.375706818" watchObservedRunningTime="2026-03-07 01:26:13.322882285 +0000 UTC m=+41.377638655" Mar 7 01:26:13.377989 containerd[1792]: time="2026-03-07T01:26:13.377954291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68bb66c496-l98z4,Uid:c12680d6-9928-4db9-850a-1fb0fcd2d7f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe\"" Mar 7 01:26:13.544311 systemd-networkd[1380]: vxlan.calico: Link UP Mar 7 01:26:13.544317 systemd-networkd[1380]: vxlan.calico: Gained carrier Mar 7 01:26:13.699589 systemd-networkd[1380]: 
cali7486af25328: Gained IPv6LL Mar 7 01:26:14.083441 systemd-networkd[1380]: cali9d3dd54da08: Gained IPv6LL Mar 7 01:26:14.094087 kubelet[3349]: I0307 01:26:14.093894 3349 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d18621-91ec-4cdc-aef0-b9e05c8877a0" path="/var/lib/kubelet/pods/90d18621-91ec-4cdc-aef0-b9e05c8877a0/volumes" Mar 7 01:26:14.211548 systemd-networkd[1380]: calib095b0f40c2: Gained IPv6LL Mar 7 01:26:14.275516 systemd-networkd[1380]: cali9c208ebf924: Gained IPv6LL Mar 7 01:26:14.467452 systemd-networkd[1380]: cali65d601c98f0: Gained IPv6LL Mar 7 01:26:14.595538 systemd-networkd[1380]: calie8e7d9796aa: Gained IPv6LL Mar 7 01:26:15.162960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289428826.mount: Deactivated successfully. Mar 7 01:26:15.171986 systemd-networkd[1380]: cali655c782d34c: Gained IPv6LL Mar 7 01:26:15.235603 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Mar 7 01:26:15.817771 containerd[1792]: time="2026-03-07T01:26:15.817725895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:15.820702 containerd[1792]: time="2026-03-07T01:26:15.820472331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 7 01:26:15.825580 containerd[1792]: time="2026-03-07T01:26:15.825552925Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:15.836359 containerd[1792]: time="2026-03-07T01:26:15.834750992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:15.836359 containerd[1792]: time="2026-03-07T01:26:15.835504991Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.974064286s" Mar 7 01:26:15.836359 containerd[1792]: time="2026-03-07T01:26:15.835528831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 7 01:26:15.839981 containerd[1792]: time="2026-03-07T01:26:15.839804626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:26:15.844659 containerd[1792]: time="2026-03-07T01:26:15.844636379Z" level=info msg="CreateContainer within sandbox \"2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:26:15.874876 containerd[1792]: time="2026-03-07T01:26:15.874846058Z" level=info msg="CreateContainer within sandbox \"2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"745abd936b0746d538c60f94e8a90b1f1f780ebba12f3663c3129fd82ad016bb\"" Mar 7 01:26:15.876465 containerd[1792]: time="2026-03-07T01:26:15.875527658Z" level=info msg="StartContainer for \"745abd936b0746d538c60f94e8a90b1f1f780ebba12f3663c3129fd82ad016bb\"" Mar 7 01:26:15.904579 systemd[1]: run-containerd-runc-k8s.io-745abd936b0746d538c60f94e8a90b1f1f780ebba12f3663c3129fd82ad016bb-runc.BOSLPZ.mount: Deactivated successfully. 
Mar 7 01:26:15.940361 containerd[1792]: time="2026-03-07T01:26:15.940291171Z" level=info msg="StartContainer for \"745abd936b0746d538c60f94e8a90b1f1f780ebba12f3663c3129fd82ad016bb\" returns successfully" Mar 7 01:26:16.340572 kubelet[3349]: I0307 01:26:16.340513 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-nsg2r" podStartSLOduration=22.286986453 podStartE2EDuration="25.340497073s" podCreationTimestamp="2026-03-07 01:25:51 +0000 UTC" firstStartedPulling="2026-03-07 01:26:12.785487847 +0000 UTC m=+40.840244177" lastFinishedPulling="2026-03-07 01:26:15.838998467 +0000 UTC m=+43.893754797" observedRunningTime="2026-03-07 01:26:16.340387193 +0000 UTC m=+44.395143563" watchObservedRunningTime="2026-03-07 01:26:16.340497073 +0000 UTC m=+44.395253443" Mar 7 01:26:17.004898 containerd[1792]: time="2026-03-07T01:26:17.004852341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:17.007375 containerd[1792]: time="2026-03-07T01:26:17.007342618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 01:26:17.010625 containerd[1792]: time="2026-03-07T01:26:17.010327094Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:17.014367 containerd[1792]: time="2026-03-07T01:26:17.014292008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:17.015141 containerd[1792]: time="2026-03-07T01:26:17.015114087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo 
tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.175276862s" Mar 7 01:26:17.015199 containerd[1792]: time="2026-03-07T01:26:17.015144367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 01:26:17.016621 containerd[1792]: time="2026-03-07T01:26:17.016603605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:26:17.022544 containerd[1792]: time="2026-03-07T01:26:17.022431837Z" level=info msg="CreateContainer within sandbox \"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:26:17.052825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539519433.mount: Deactivated successfully. Mar 7 01:26:17.063334 containerd[1792]: time="2026-03-07T01:26:17.063287383Z" level=info msg="CreateContainer within sandbox \"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"23fd75ccccc50b4b7eb91d2981f552fe90f44e529277397eac60812f86e830e6\"" Mar 7 01:26:17.063786 containerd[1792]: time="2026-03-07T01:26:17.063728222Z" level=info msg="StartContainer for \"23fd75ccccc50b4b7eb91d2981f552fe90f44e529277397eac60812f86e830e6\"" Mar 7 01:26:17.095506 systemd[1]: run-containerd-runc-k8s.io-23fd75ccccc50b4b7eb91d2981f552fe90f44e529277397eac60812f86e830e6-runc.DsIOxb.mount: Deactivated successfully. 
Mar 7 01:26:17.119996 containerd[1792]: time="2026-03-07T01:26:17.119963186Z" level=info msg="StartContainer for \"23fd75ccccc50b4b7eb91d2981f552fe90f44e529277397eac60812f86e830e6\" returns successfully" Mar 7 01:26:20.259338 containerd[1792]: time="2026-03-07T01:26:20.258619831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:20.261110 containerd[1792]: time="2026-03-07T01:26:20.260990589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 01:26:20.264189 containerd[1792]: time="2026-03-07T01:26:20.264009385Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:20.269793 containerd[1792]: time="2026-03-07T01:26:20.269319339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:20.270474 containerd[1792]: time="2026-03-07T01:26:20.270442218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.253721773s" Mar 7 01:26:20.270529 containerd[1792]: time="2026-03-07T01:26:20.270474418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 01:26:20.271890 containerd[1792]: time="2026-03-07T01:26:20.271869736Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:26:20.292224 containerd[1792]: time="2026-03-07T01:26:20.292193113Z" level=info msg="CreateContainer within sandbox \"0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:26:20.328639 containerd[1792]: time="2026-03-07T01:26:20.328593511Z" level=info msg="CreateContainer within sandbox \"0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"138074d8c9953773c34c0decafdbb5cf3118eb255ba959fe596e521a26684e48\"" Mar 7 01:26:20.330276 containerd[1792]: time="2026-03-07T01:26:20.329593469Z" level=info msg="StartContainer for \"138074d8c9953773c34c0decafdbb5cf3118eb255ba959fe596e521a26684e48\"" Mar 7 01:26:20.398729 containerd[1792]: time="2026-03-07T01:26:20.398601390Z" level=info msg="StartContainer for \"138074d8c9953773c34c0decafdbb5cf3118eb255ba959fe596e521a26684e48\" returns successfully" Mar 7 01:26:21.359065 kubelet[3349]: I0307 01:26:21.358706 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55c8d85db8-ctmcz" podStartSLOduration=21.215491091 podStartE2EDuration="28.358691001s" podCreationTimestamp="2026-03-07 01:25:53 +0000 UTC" firstStartedPulling="2026-03-07 01:26:13.128542266 +0000 UTC m=+41.183298636" lastFinishedPulling="2026-03-07 01:26:20.271742176 +0000 UTC m=+48.326498546" observedRunningTime="2026-03-07 01:26:21.357893362 +0000 UTC m=+49.412650252" watchObservedRunningTime="2026-03-07 01:26:21.358691001 +0000 UTC m=+49.413447331" Mar 7 01:26:21.368137 systemd[1]: run-containerd-runc-k8s.io-138074d8c9953773c34c0decafdbb5cf3118eb255ba959fe596e521a26684e48-runc.fQyzlY.mount: Deactivated successfully. 
Mar 7 01:26:23.156087 containerd[1792]: time="2026-03-07T01:26:23.156037686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:23.158556 containerd[1792]: time="2026-03-07T01:26:23.158323043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 01:26:23.162313 containerd[1792]: time="2026-03-07T01:26:23.162257279Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:23.166834 containerd[1792]: time="2026-03-07T01:26:23.166785633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:23.167627 containerd[1792]: time="2026-03-07T01:26:23.167473513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.894541258s" Mar 7 01:26:23.167627 containerd[1792]: time="2026-03-07T01:26:23.167504833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 01:26:23.169997 containerd[1792]: time="2026-03-07T01:26:23.169195711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:26:23.173795 containerd[1792]: time="2026-03-07T01:26:23.173636666Z" level=info msg="CreateContainer within sandbox 
\"9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:26:23.199548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188443523.mount: Deactivated successfully. Mar 7 01:26:23.210813 containerd[1792]: time="2026-03-07T01:26:23.210703743Z" level=info msg="CreateContainer within sandbox \"9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ee6a894c5ecd5a608d5c8600ac9e364c7a67e975738ea004b29775fe9c93f6d\"" Mar 7 01:26:23.212126 containerd[1792]: time="2026-03-07T01:26:23.212092421Z" level=info msg="StartContainer for \"2ee6a894c5ecd5a608d5c8600ac9e364c7a67e975738ea004b29775fe9c93f6d\"" Mar 7 01:26:23.294260 containerd[1792]: time="2026-03-07T01:26:23.294190406Z" level=info msg="StartContainer for \"2ee6a894c5ecd5a608d5c8600ac9e364c7a67e975738ea004b29775fe9c93f6d\" returns successfully" Mar 7 01:26:23.482202 containerd[1792]: time="2026-03-07T01:26:23.481421190Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:23.483508 containerd[1792]: time="2026-03-07T01:26:23.483485108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:26:23.485789 containerd[1792]: time="2026-03-07T01:26:23.485765425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 316.537674ms" Mar 7 01:26:23.485898 containerd[1792]: time="2026-03-07T01:26:23.485883945Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 01:26:23.486942 containerd[1792]: time="2026-03-07T01:26:23.486922184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:26:23.493158 containerd[1792]: time="2026-03-07T01:26:23.493135897Z" level=info msg="CreateContainer within sandbox \"4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:26:23.532657 containerd[1792]: time="2026-03-07T01:26:23.532615531Z" level=info msg="CreateContainer within sandbox \"4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b90ef84e4d7d9e62613c60a88b9d7e22c14817c315306f9658ee8d7eefd37695\"" Mar 7 01:26:23.533708 containerd[1792]: time="2026-03-07T01:26:23.533318090Z" level=info msg="StartContainer for \"b90ef84e4d7d9e62613c60a88b9d7e22c14817c315306f9658ee8d7eefd37695\"" Mar 7 01:26:23.599973 containerd[1792]: time="2026-03-07T01:26:23.599930013Z" level=info msg="StartContainer for \"b90ef84e4d7d9e62613c60a88b9d7e22c14817c315306f9658ee8d7eefd37695\" returns successfully" Mar 7 01:26:24.201009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113736128.mount: Deactivated successfully. 
Mar 7 01:26:24.744383 kubelet[3349]: I0307 01:26:24.355983 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:26:24.744383 kubelet[3349]: I0307 01:26:24.369312 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f47778fd4-w44bp" podStartSLOduration=23.407347095 podStartE2EDuration="33.369285805s" podCreationTimestamp="2026-03-07 01:25:51 +0000 UTC" firstStartedPulling="2026-03-07 01:26:13.206461722 +0000 UTC m=+41.261218092" lastFinishedPulling="2026-03-07 01:26:23.168400432 +0000 UTC m=+51.223156802" observedRunningTime="2026-03-07 01:26:23.362521087 +0000 UTC m=+51.417277537" watchObservedRunningTime="2026-03-07 01:26:24.369285805 +0000 UTC m=+52.424042175" Mar 7 01:26:24.744383 kubelet[3349]: I0307 01:26:24.369428 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f47778fd4-2j8nd" podStartSLOduration=23.104340202 podStartE2EDuration="33.369424405s" podCreationTimestamp="2026-03-07 01:25:51 +0000 UTC" firstStartedPulling="2026-03-07 01:26:13.221544941 +0000 UTC m=+41.276301311" lastFinishedPulling="2026-03-07 01:26:23.486629144 +0000 UTC m=+51.541385514" observedRunningTime="2026-03-07 01:26:24.368814526 +0000 UTC m=+52.423570896" watchObservedRunningTime="2026-03-07 01:26:24.369424405 +0000 UTC m=+52.424180775" Mar 7 01:26:25.092512 containerd[1792]: time="2026-03-07T01:26:25.092480450Z" level=info msg="StopPodSandbox for \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\"" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.166 [INFO][5743] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.167 [INFO][5743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" iface="eth0" netns="/var/run/netns/cni-fe64a714-d273-736f-d6ec-d28d680baa9b" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.167 [INFO][5743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" iface="eth0" netns="/var/run/netns/cni-fe64a714-d273-736f-d6ec-d28d680baa9b" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.167 [INFO][5743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" iface="eth0" netns="/var/run/netns/cni-fe64a714-d273-736f-d6ec-d28d680baa9b" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.167 [INFO][5743] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.167 [INFO][5743] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.197 [INFO][5750] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.197 [INFO][5750] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.197 [INFO][5750] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.212 [WARNING][5750] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.212 [INFO][5750] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.213 [INFO][5750] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:25.219095 containerd[1792]: 2026-03-07 01:26:25.217 [INFO][5743] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:25.222051 containerd[1792]: time="2026-03-07T01:26:25.221723221Z" level=info msg="TearDown network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\" successfully" Mar 7 01:26:25.222199 containerd[1792]: time="2026-03-07T01:26:25.221753141Z" level=info msg="StopPodSandbox for \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\" returns successfully" Mar 7 01:26:25.223489 containerd[1792]: time="2026-03-07T01:26:25.223461699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4xdw,Uid:9a16e5b6-f499-4738-9cd6-aee9c02063e5,Namespace:kube-system,Attempt:1,}" Mar 7 01:26:25.224813 systemd[1]: run-netns-cni\x2dfe64a714\x2dd273\x2d736f\x2dd6ec\x2dd28d680baa9b.mount: Deactivated successfully. 
Mar 7 01:26:25.358542 kubelet[3349]: I0307 01:26:25.358397 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:26:25.390900 systemd-networkd[1380]: cali167b4b014bb: Link UP Mar 7 01:26:25.392224 systemd-networkd[1380]: cali167b4b014bb: Gained carrier Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.323 [INFO][5757] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0 coredns-674b8bbfcf- kube-system 9a16e5b6-f499-4738-9cd6-aee9c02063e5 1051 0 2026-03-07 01:25:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-0072e04abc coredns-674b8bbfcf-l4xdw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali167b4b014bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.323 [INFO][5757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.346 [INFO][5770] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" HandleID="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.410383 
containerd[1792]: 2026-03-07 01:26:25.355 [INFO][5770] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" HandleID="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-0072e04abc", "pod":"coredns-674b8bbfcf-l4xdw", "timestamp":"2026-03-07 01:26:25.346698996 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0072e04abc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000376000)} Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.355 [INFO][5770] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.355 [INFO][5770] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.356 [INFO][5770] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0072e04abc' Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.358 [INFO][5770] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.363 [INFO][5770] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.366 [INFO][5770] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.367 [INFO][5770] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.370 [INFO][5770] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.370 [INFO][5770] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.371 [INFO][5770] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392 Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.375 [INFO][5770] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.384 [INFO][5770] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.26.8/26] block=192.168.26.0/26 handle="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.384 [INFO][5770] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.8/26] handle="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" host="ci-4081.3.6-n-0072e04abc" Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.384 [INFO][5770] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:25.410383 containerd[1792]: 2026-03-07 01:26:25.384 [INFO][5770] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.8/26] IPv6=[] ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" HandleID="k8s-pod-network.8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.410934 containerd[1792]: 2026-03-07 01:26:25.388 [INFO][5757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a16e5b6-f499-4738-9cd6-aee9c02063e5", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"", Pod:"coredns-674b8bbfcf-l4xdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali167b4b014bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:25.410934 containerd[1792]: 2026-03-07 01:26:25.388 [INFO][5757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.8/32] ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.410934 containerd[1792]: 2026-03-07 01:26:25.388 [INFO][5757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali167b4b014bb ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.410934 containerd[1792]: 2026-03-07 01:26:25.392 [INFO][5757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.410934 containerd[1792]: 2026-03-07 01:26:25.392 [INFO][5757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a16e5b6-f499-4738-9cd6-aee9c02063e5", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392", Pod:"coredns-674b8bbfcf-l4xdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali167b4b014bb", MAC:"6e:0c:90:9f:f6:be", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:25.410934 containerd[1792]: 2026-03-07 01:26:25.407 [INFO][5757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392" Namespace="kube-system" Pod="coredns-674b8bbfcf-l4xdw" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:25.445766 containerd[1792]: time="2026-03-07T01:26:25.444770043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:26:25.445766 containerd[1792]: time="2026-03-07T01:26:25.444846803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:26:25.445766 containerd[1792]: time="2026-03-07T01:26:25.444866483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:25.445766 containerd[1792]: time="2026-03-07T01:26:25.444981443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:26:25.509394 containerd[1792]: time="2026-03-07T01:26:25.509286929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4xdw,Uid:9a16e5b6-f499-4738-9cd6-aee9c02063e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392\"" Mar 7 01:26:25.526621 containerd[1792]: time="2026-03-07T01:26:25.526584389Z" level=info msg="CreateContainer within sandbox \"8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:26:25.566756 containerd[1792]: time="2026-03-07T01:26:25.566710742Z" level=info msg="CreateContainer within sandbox \"8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4b45691193bbeb0c98b7a4284ddd57659a5465c36f4eaafb80030f34c39b6a0\"" Mar 7 01:26:25.567836 containerd[1792]: time="2026-03-07T01:26:25.567661901Z" level=info msg="StartContainer for \"a4b45691193bbeb0c98b7a4284ddd57659a5465c36f4eaafb80030f34c39b6a0\"" Mar 7 01:26:25.613668 containerd[1792]: time="2026-03-07T01:26:25.613562728Z" level=info msg="StartContainer for \"a4b45691193bbeb0c98b7a4284ddd57659a5465c36f4eaafb80030f34c39b6a0\" returns successfully" Mar 7 01:26:26.086120 containerd[1792]: time="2026-03-07T01:26:26.086005543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:26.089630 containerd[1792]: time="2026-03-07T01:26:26.089410139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 01:26:26.094827 containerd[1792]: time="2026-03-07T01:26:26.094796133Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:26.103814 containerd[1792]: time="2026-03-07T01:26:26.103336803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:26.104349 containerd[1792]: time="2026-03-07T01:26:26.104278322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 2.617159098s" Mar 7 01:26:26.104405 containerd[1792]: time="2026-03-07T01:26:26.104353162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 01:26:26.106693 containerd[1792]: time="2026-03-07T01:26:26.105660600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:26:26.112170 containerd[1792]: time="2026-03-07T01:26:26.112147153Z" level=info msg="CreateContainer within sandbox \"5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:26:26.154903 containerd[1792]: time="2026-03-07T01:26:26.154857423Z" level=info msg="CreateContainer within sandbox \"5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d16e2a8910d76d15a340af60c20a72708698b6096408768d14fc4eeca8d2ef4e\"" Mar 7 01:26:26.155972 containerd[1792]: time="2026-03-07T01:26:26.155786782Z" level=info msg="StartContainer for \"d16e2a8910d76d15a340af60c20a72708698b6096408768d14fc4eeca8d2ef4e\"" Mar 7 01:26:26.214320 
containerd[1792]: time="2026-03-07T01:26:26.212974236Z" level=info msg="StartContainer for \"d16e2a8910d76d15a340af60c20a72708698b6096408768d14fc4eeca8d2ef4e\" returns successfully" Mar 7 01:26:26.378635 kubelet[3349]: I0307 01:26:26.377511 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l4xdw" podStartSLOduration=48.377492486 podStartE2EDuration="48.377492486s" podCreationTimestamp="2026-03-07 01:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:26:26.376759727 +0000 UTC m=+54.431516097" watchObservedRunningTime="2026-03-07 01:26:26.377492486 +0000 UTC m=+54.432248856" Mar 7 01:26:26.563423 systemd-networkd[1380]: cali167b4b014bb: Gained IPv6LL Mar 7 01:26:27.289940 containerd[1792]: time="2026-03-07T01:26:27.289895953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:27.293130 containerd[1792]: time="2026-03-07T01:26:27.293097669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 7 01:26:27.296284 containerd[1792]: time="2026-03-07T01:26:27.296243705Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:27.301091 containerd[1792]: time="2026-03-07T01:26:27.301065500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:27.301833 containerd[1792]: time="2026-03-07T01:26:27.301797379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id 
\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.196105859s" Mar 7 01:26:27.301931 containerd[1792]: time="2026-03-07T01:26:27.301914859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 7 01:26:27.303049 containerd[1792]: time="2026-03-07T01:26:27.303027138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:26:27.309567 containerd[1792]: time="2026-03-07T01:26:27.309445210Z" level=info msg="CreateContainer within sandbox \"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:26:27.346361 containerd[1792]: time="2026-03-07T01:26:27.346321968Z" level=info msg="CreateContainer within sandbox \"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3c7ab5852f998eab685bd3198c42d46a7eafe310156b17e7e646af4a1d11a614\"" Mar 7 01:26:27.347907 containerd[1792]: time="2026-03-07T01:26:27.347881606Z" level=info msg="StartContainer for \"3c7ab5852f998eab685bd3198c42d46a7eafe310156b17e7e646af4a1d11a614\"" Mar 7 01:26:27.410423 containerd[1792]: time="2026-03-07T01:26:27.410359694Z" level=info msg="StartContainer for \"3c7ab5852f998eab685bd3198c42d46a7eafe310156b17e7e646af4a1d11a614\" returns successfully" Mar 7 01:26:28.171587 kubelet[3349]: I0307 01:26:28.171550 3349 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:26:28.171587 
kubelet[3349]: I0307 01:26:28.171591 3349 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:26:29.078849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460863962.mount: Deactivated successfully. Mar 7 01:26:29.131335 containerd[1792]: time="2026-03-07T01:26:29.131082236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:29.134507 containerd[1792]: time="2026-03-07T01:26:29.134324952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 01:26:29.137305 containerd[1792]: time="2026-03-07T01:26:29.137218988Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:29.141686 containerd[1792]: time="2026-03-07T01:26:29.141642182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:26:29.142534 containerd[1792]: time="2026-03-07T01:26:29.142416981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.839360323s" Mar 7 01:26:29.142534 containerd[1792]: time="2026-03-07T01:26:29.142449181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference 
\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 01:26:29.150925 containerd[1792]: time="2026-03-07T01:26:29.150839250Z" level=info msg="CreateContainer within sandbox \"5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:26:29.185521 containerd[1792]: time="2026-03-07T01:26:29.185412323Z" level=info msg="CreateContainer within sandbox \"5256f035c32167953375c626567ae72e089b4e64eb06cbf9b5a1c12bed09a2fe\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"95742c523db206e0adf5906a00223e93a050bbc3b9b744353febafcd9798fae9\"" Mar 7 01:26:29.188265 containerd[1792]: time="2026-03-07T01:26:29.185890042Z" level=info msg="StartContainer for \"95742c523db206e0adf5906a00223e93a050bbc3b9b744353febafcd9798fae9\"" Mar 7 01:26:29.250939 containerd[1792]: time="2026-03-07T01:26:29.250684955Z" level=info msg="StartContainer for \"95742c523db206e0adf5906a00223e93a050bbc3b9b744353febafcd9798fae9\" returns successfully" Mar 7 01:26:29.392149 kubelet[3349]: I0307 01:26:29.390750 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68bb66c496-l98z4" podStartSLOduration=1.631303589 podStartE2EDuration="17.390733886s" podCreationTimestamp="2026-03-07 01:26:12 +0000 UTC" firstStartedPulling="2026-03-07 01:26:13.383909163 +0000 UTC m=+41.438665533" lastFinishedPulling="2026-03-07 01:26:29.14333946 +0000 UTC m=+57.198095830" observedRunningTime="2026-03-07 01:26:29.390493767 +0000 UTC m=+57.445250097" watchObservedRunningTime="2026-03-07 01:26:29.390733886 +0000 UTC m=+57.445490256" Mar 7 01:26:29.392149 kubelet[3349]: I0307 01:26:29.390939 3349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m29f2" podStartSLOduration=22.210198823 podStartE2EDuration="36.390934446s" podCreationTimestamp="2026-03-07 01:25:53 +0000 UTC" 
firstStartedPulling="2026-03-07 01:26:13.122170555 +0000 UTC m=+41.176926925" lastFinishedPulling="2026-03-07 01:26:27.302906178 +0000 UTC m=+55.357662548" observedRunningTime="2026-03-07 01:26:28.390247355 +0000 UTC m=+56.445003725" watchObservedRunningTime="2026-03-07 01:26:29.390934446 +0000 UTC m=+57.445690776" Mar 7 01:26:29.896651 systemd[1]: run-containerd-runc-k8s.io-95742c523db206e0adf5906a00223e93a050bbc3b9b744353febafcd9798fae9-runc.mjMPmX.mount: Deactivated successfully. Mar 7 01:26:32.075724 containerd[1792]: time="2026-03-07T01:26:32.075646348Z" level=info msg="StopPodSandbox for \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\"" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.115 [WARNING][6020] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.115 [INFO][6020] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.115 [INFO][6020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" iface="eth0" netns="" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.115 [INFO][6020] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.115 [INFO][6020] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.135 [INFO][6029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.135 [INFO][6029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.135 [INFO][6029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.145 [WARNING][6029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.145 [INFO][6029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.147 [INFO][6029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.151751 containerd[1792]: 2026-03-07 01:26:32.150 [INFO][6020] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.152099 containerd[1792]: time="2026-03-07T01:26:32.151788645Z" level=info msg="TearDown network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\" successfully" Mar 7 01:26:32.152099 containerd[1792]: time="2026-03-07T01:26:32.151812205Z" level=info msg="StopPodSandbox for \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\" returns successfully" Mar 7 01:26:32.152814 containerd[1792]: time="2026-03-07T01:26:32.152535364Z" level=info msg="RemovePodSandbox for \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\"" Mar 7 01:26:32.152814 containerd[1792]: time="2026-03-07T01:26:32.152565164Z" level=info msg="Forcibly stopping sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\"" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.184 [WARNING][6043] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" WorkloadEndpoint="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.184 [INFO][6043] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.184 [INFO][6043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" iface="eth0" netns="" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.184 [INFO][6043] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.185 [INFO][6043] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.202 [INFO][6050] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.202 [INFO][6050] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.202 [INFO][6050] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.211 [WARNING][6050] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.212 [INFO][6050] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" HandleID="k8s-pod-network.8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Workload="ci--4081.3.6--n--0072e04abc-k8s-whisker--57d95b7b87--cv65p-eth0" Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.213 [INFO][6050] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.217316 containerd[1792]: 2026-03-07 01:26:32.215 [INFO][6043] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a" Mar 7 01:26:32.217316 containerd[1792]: time="2026-03-07T01:26:32.216611438Z" level=info msg="TearDown network for sandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\" successfully" Mar 7 01:26:32.230604 containerd[1792]: time="2026-03-07T01:26:32.230566659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:32.230792 containerd[1792]: time="2026-03-07T01:26:32.230777498Z" level=info msg="RemovePodSandbox \"8e5987eb0709162c431e4cd9a6539e35a3a484597f74045910b5e01fb639fd8a\" returns successfully" Mar 7 01:26:32.231420 containerd[1792]: time="2026-03-07T01:26:32.231386498Z" level=info msg="StopPodSandbox for \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\"" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.261 [WARNING][6064] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0", GenerateName:"calico-kube-controllers-55c8d85db8-", Namespace:"calico-system", SelfLink:"", UID:"e548dd02-7656-4281-b324-543590d586b1", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c8d85db8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b", Pod:"calico-kube-controllers-55c8d85db8-ctmcz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib095b0f40c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.262 [INFO][6064] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.262 [INFO][6064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" iface="eth0" netns="" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.262 [INFO][6064] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.262 [INFO][6064] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.280 [INFO][6071] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.280 [INFO][6071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.280 [INFO][6071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.289 [WARNING][6071] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.289 [INFO][6071] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.290 [INFO][6071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.293930 containerd[1792]: 2026-03-07 01:26:32.292 [INFO][6064] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.294486 containerd[1792]: time="2026-03-07T01:26:32.293971253Z" level=info msg="TearDown network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\" successfully" Mar 7 01:26:32.294486 containerd[1792]: time="2026-03-07T01:26:32.293997813Z" level=info msg="StopPodSandbox for \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\" returns successfully" Mar 7 01:26:32.294571 containerd[1792]: time="2026-03-07T01:26:32.294544333Z" level=info msg="RemovePodSandbox for \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\"" Mar 7 01:26:32.294611 containerd[1792]: time="2026-03-07T01:26:32.294577092Z" level=info msg="Forcibly stopping sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\"" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.339 [WARNING][6085] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0", GenerateName:"calico-kube-controllers-55c8d85db8-", Namespace:"calico-system", SelfLink:"", UID:"e548dd02-7656-4281-b324-543590d586b1", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c8d85db8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"0c6e8f0082d7aa25cceaba44391b7faaef89408afd13ae5f96c89ef37408782b", Pod:"calico-kube-controllers-55c8d85db8-ctmcz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib095b0f40c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.339 [INFO][6085] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.339 [INFO][6085] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" iface="eth0" netns="" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.339 [INFO][6085] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.339 [INFO][6085] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.361 [INFO][6092] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.361 [INFO][6092] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.361 [INFO][6092] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.370 [WARNING][6092] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.370 [INFO][6092] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" HandleID="k8s-pod-network.8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--kube--controllers--55c8d85db8--ctmcz-eth0" Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.371 [INFO][6092] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.376066 containerd[1792]: 2026-03-07 01:26:32.373 [INFO][6085] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a" Mar 7 01:26:32.376066 containerd[1792]: time="2026-03-07T01:26:32.374765184Z" level=info msg="TearDown network for sandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\" successfully" Mar 7 01:26:32.381542 containerd[1792]: time="2026-03-07T01:26:32.381428295Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:32.381542 containerd[1792]: time="2026-03-07T01:26:32.381485855Z" level=info msg="RemovePodSandbox \"8fb7535d1ff3fdf9ccb024f9c6a310737fcd4d1b7fd76125c5f8f246eb16334a\" returns successfully" Mar 7 01:26:32.381920 containerd[1792]: time="2026-03-07T01:26:32.381891975Z" level=info msg="StopPodSandbox for \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\"" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.411 [WARNING][6106] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"e51adf89-b01f-431e-8e84-4c2814fa6d45", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c", Pod:"calico-apiserver-6f47778fd4-w44bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie8e7d9796aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.411 [INFO][6106] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.411 [INFO][6106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" iface="eth0" netns="" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.412 [INFO][6106] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.412 [INFO][6106] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.429 [INFO][6113] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.430 [INFO][6113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.430 [INFO][6113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.438 [WARNING][6113] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.438 [INFO][6113] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.439 [INFO][6113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.442920 containerd[1792]: 2026-03-07 01:26:32.441 [INFO][6106] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.443320 containerd[1792]: time="2026-03-07T01:26:32.442964092Z" level=info msg="TearDown network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\" successfully" Mar 7 01:26:32.443320 containerd[1792]: time="2026-03-07T01:26:32.442988772Z" level=info msg="StopPodSandbox for \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\" returns successfully" Mar 7 01:26:32.443558 containerd[1792]: time="2026-03-07T01:26:32.443531492Z" level=info msg="RemovePodSandbox for \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\"" Mar 7 01:26:32.443592 containerd[1792]: time="2026-03-07T01:26:32.443567692Z" level=info msg="Forcibly stopping sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\"" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.473 [WARNING][6127] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"e51adf89-b01f-431e-8e84-4c2814fa6d45", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"9a7a56146d1efc895603665671675bbc3b092f00982f00178aa15150e8e4658c", Pod:"calico-apiserver-6f47778fd4-w44bp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie8e7d9796aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.473 [INFO][6127] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.473 [INFO][6127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" iface="eth0" netns="" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.473 [INFO][6127] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.473 [INFO][6127] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.490 [INFO][6134] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.490 [INFO][6134] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.490 [INFO][6134] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.498 [WARNING][6134] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.498 [INFO][6134] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" HandleID="k8s-pod-network.6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--w44bp-eth0" Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.500 [INFO][6134] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.503194 containerd[1792]: 2026-03-07 01:26:32.501 [INFO][6127] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8" Mar 7 01:26:32.503597 containerd[1792]: time="2026-03-07T01:26:32.503235771Z" level=info msg="TearDown network for sandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\" successfully" Mar 7 01:26:32.510873 containerd[1792]: time="2026-03-07T01:26:32.510821281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:32.510944 containerd[1792]: time="2026-03-07T01:26:32.510913161Z" level=info msg="RemovePodSandbox \"6dcd1a96e42783ce624fd1a5950b3b8474c96b20233fc1e09c489a9eae3bfcf8\" returns successfully" Mar 7 01:26:32.511402 containerd[1792]: time="2026-03-07T01:26:32.511378920Z" level=info msg="StopPodSandbox for \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\"" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.542 [WARNING][6148] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a16e5b6-f499-4738-9cd6-aee9c02063e5", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392", Pod:"coredns-674b8bbfcf-l4xdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali167b4b014bb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.543 [INFO][6148] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.543 [INFO][6148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" iface="eth0" netns="" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.543 [INFO][6148] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.543 [INFO][6148] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.561 [INFO][6155] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.561 [INFO][6155] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.561 [INFO][6155] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.569 [WARNING][6155] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.569 [INFO][6155] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.571 [INFO][6155] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.574490 containerd[1792]: 2026-03-07 01:26:32.572 [INFO][6148] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.574979 containerd[1792]: time="2026-03-07T01:26:32.574525035Z" level=info msg="TearDown network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\" successfully" Mar 7 01:26:32.574979 containerd[1792]: time="2026-03-07T01:26:32.574549955Z" level=info msg="StopPodSandbox for \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\" returns successfully" Mar 7 01:26:32.575290 containerd[1792]: time="2026-03-07T01:26:32.575265674Z" level=info msg="RemovePodSandbox for \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\"" Mar 7 01:26:32.575290 containerd[1792]: time="2026-03-07T01:26:32.575310634Z" level=info msg="Forcibly stopping sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\"" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.614 [WARNING][6170] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a16e5b6-f499-4738-9cd6-aee9c02063e5", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"8ebbc1967654050e6da7c680953cf1389cb39962c5fab133f6520855d2171392", Pod:"coredns-674b8bbfcf-l4xdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali167b4b014bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 
01:26:32.614 [INFO][6170] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.614 [INFO][6170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" iface="eth0" netns="" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.614 [INFO][6170] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.614 [INFO][6170] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.632 [INFO][6177] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.632 [INFO][6177] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.632 [INFO][6177] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.640 [WARNING][6177] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.640 [INFO][6177] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" HandleID="k8s-pod-network.e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--l4xdw-eth0" Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.641 [INFO][6177] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.645195 containerd[1792]: 2026-03-07 01:26:32.643 [INFO][6170] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0" Mar 7 01:26:32.646967 containerd[1792]: time="2026-03-07T01:26:32.645464460Z" level=info msg="TearDown network for sandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\" successfully" Mar 7 01:26:32.654721 containerd[1792]: time="2026-03-07T01:26:32.654689287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:32.654790 containerd[1792]: time="2026-03-07T01:26:32.654765487Z" level=info msg="RemovePodSandbox \"e5a97ff0b2e597b4bc839def9a2e4baac1fe6ae910617655936ea88ac69f03d0\" returns successfully" Mar 7 01:26:32.655183 containerd[1792]: time="2026-03-07T01:26:32.655160046Z" level=info msg="StopPodSandbox for \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\"" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.686 [WARNING][6191] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"581cbdc8-dccd-4487-b131-103b5bf24401", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e", Pod:"calico-apiserver-6f47778fd4-2j8nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65d601c98f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.686 [INFO][6191] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.686 [INFO][6191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" iface="eth0" netns="" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.686 [INFO][6191] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.686 [INFO][6191] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.704 [INFO][6198] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.704 [INFO][6198] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.704 [INFO][6198] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.712 [WARNING][6198] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.712 [INFO][6198] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.713 [INFO][6198] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.716905 containerd[1792]: 2026-03-07 01:26:32.715 [INFO][6191] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.717412 containerd[1792]: time="2026-03-07T01:26:32.716944483Z" level=info msg="TearDown network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\" successfully" Mar 7 01:26:32.717412 containerd[1792]: time="2026-03-07T01:26:32.716969403Z" level=info msg="StopPodSandbox for \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\" returns successfully" Mar 7 01:26:32.717695 containerd[1792]: time="2026-03-07T01:26:32.717659322Z" level=info msg="RemovePodSandbox for \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\"" Mar 7 01:26:32.717736 containerd[1792]: time="2026-03-07T01:26:32.717703602Z" level=info msg="Forcibly stopping sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\"" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.748 [WARNING][6212] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0", GenerateName:"calico-apiserver-6f47778fd4-", Namespace:"calico-system", SelfLink:"", UID:"581cbdc8-dccd-4487-b131-103b5bf24401", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f47778fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"4d1c4961480bad066ff952302ff324f66afff100d6e0c70ea0314195287c7d1e", Pod:"calico-apiserver-6f47778fd4-2j8nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65d601c98f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.748 [INFO][6212] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.748 [INFO][6212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" iface="eth0" netns="" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.749 [INFO][6212] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.749 [INFO][6212] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.767 [INFO][6219] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.767 [INFO][6219] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.767 [INFO][6219] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.776 [WARNING][6219] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.776 [INFO][6219] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" HandleID="k8s-pod-network.b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Workload="ci--4081.3.6--n--0072e04abc-k8s-calico--apiserver--6f47778fd4--2j8nd-eth0" Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.777 [INFO][6219] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.780587 containerd[1792]: 2026-03-07 01:26:32.779 [INFO][6212] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398" Mar 7 01:26:32.780984 containerd[1792]: time="2026-03-07T01:26:32.780628277Z" level=info msg="TearDown network for sandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\" successfully" Mar 7 01:26:32.788065 containerd[1792]: time="2026-03-07T01:26:32.788031987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:32.788136 containerd[1792]: time="2026-03-07T01:26:32.788108347Z" level=info msg="RemovePodSandbox \"b0a751ad21b7253948adde56901a57c94c8b998c22398c8139ee92387dddc398\" returns successfully" Mar 7 01:26:32.788606 containerd[1792]: time="2026-03-07T01:26:32.788580147Z" level=info msg="StopPodSandbox for \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\"" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.820 [WARNING][6233] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cc35d0d5-b5ae-4bde-a7b6-5af37327d945", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81", Pod:"goldmane-5b85766d88-nsg2r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali9d3dd54da08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.820 [INFO][6233] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.820 [INFO][6233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" iface="eth0" netns="" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.820 [INFO][6233] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.820 [INFO][6233] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.847 [INFO][6240] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.847 [INFO][6240] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.847 [INFO][6240] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.857 [WARNING][6240] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.857 [INFO][6240] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.858 [INFO][6240] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.862448 containerd[1792]: 2026-03-07 01:26:32.860 [INFO][6233] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.862448 containerd[1792]: time="2026-03-07T01:26:32.862409087Z" level=info msg="TearDown network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\" successfully" Mar 7 01:26:32.862448 containerd[1792]: time="2026-03-07T01:26:32.862430767Z" level=info msg="StopPodSandbox for \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\" returns successfully" Mar 7 01:26:32.863506 containerd[1792]: time="2026-03-07T01:26:32.862838127Z" level=info msg="RemovePodSandbox for \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\"" Mar 7 01:26:32.863506 containerd[1792]: time="2026-03-07T01:26:32.862865766Z" level=info msg="Forcibly stopping sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\"" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.899 [WARNING][6254] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cc35d0d5-b5ae-4bde-a7b6-5af37327d945", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"2887391bfa16e72e28f5ef694b4104dd1575e0b45d20d501aa127875e431cd81", Pod:"goldmane-5b85766d88-nsg2r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9d3dd54da08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.899 [INFO][6254] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.899 [INFO][6254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" iface="eth0" netns="" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.899 [INFO][6254] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.899 [INFO][6254] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.918 [INFO][6261] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.918 [INFO][6261] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.918 [INFO][6261] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.928 [WARNING][6261] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.928 [INFO][6261] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" HandleID="k8s-pod-network.4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Workload="ci--4081.3.6--n--0072e04abc-k8s-goldmane--5b85766d88--nsg2r-eth0" Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.929 [INFO][6261] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:32.933026 containerd[1792]: 2026-03-07 01:26:32.931 [INFO][6254] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64" Mar 7 01:26:32.934802 containerd[1792]: time="2026-03-07T01:26:32.933491671Z" level=info msg="TearDown network for sandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\" successfully" Mar 7 01:26:32.948720 containerd[1792]: time="2026-03-07T01:26:32.948681051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:32.948818 containerd[1792]: time="2026-03-07T01:26:32.948763931Z" level=info msg="RemovePodSandbox \"4fef56ffc107e575029239a3f3251d663fbe73eb49b3b3c0cd3c9b0b777c7d64\" returns successfully" Mar 7 01:26:32.949279 containerd[1792]: time="2026-03-07T01:26:32.949257170Z" level=info msg="StopPodSandbox for \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\"" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:32.985 [WARNING][6276] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"198e2d2d-ab2f-4ba4-877d-5a441bf77609", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5", Pod:"coredns-674b8bbfcf-458z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7486af25328", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:32.985 [INFO][6276] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:32.985 [INFO][6276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" iface="eth0" netns="" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:32.985 [INFO][6276] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:32.985 [INFO][6276] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:33.003 [INFO][6283] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:33.004 [INFO][6283] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:33.004 [INFO][6283] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:33.012 [WARNING][6283] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:33.012 [INFO][6283] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:33.013 [INFO][6283] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:33.016512 containerd[1792]: 2026-03-07 01:26:33.015 [INFO][6276] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.017423 containerd[1792]: time="2026-03-07T01:26:33.016562279Z" level=info msg="TearDown network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\" successfully" Mar 7 01:26:33.017423 containerd[1792]: time="2026-03-07T01:26:33.016587159Z" level=info msg="StopPodSandbox for \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\" returns successfully" Mar 7 01:26:33.017423 containerd[1792]: time="2026-03-07T01:26:33.016987999Z" level=info msg="RemovePodSandbox for \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\"" Mar 7 01:26:33.017423 containerd[1792]: time="2026-03-07T01:26:33.017019159Z" level=info msg="Forcibly stopping sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\"" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.050 [WARNING][6297] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"198e2d2d-ab2f-4ba4-877d-5a441bf77609", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"bcfbbb248aca0f66d02eb55b84b4625c770cbf0ee6ffbeddf15b9654aae69af5", Pod:"coredns-674b8bbfcf-458z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7486af25328", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 
01:26:33.051 [INFO][6297] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.051 [INFO][6297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" iface="eth0" netns="" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.051 [INFO][6297] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.051 [INFO][6297] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.069 [INFO][6304] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.069 [INFO][6304] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.069 [INFO][6304] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.078 [WARNING][6304] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.078 [INFO][6304] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" HandleID="k8s-pod-network.2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Workload="ci--4081.3.6--n--0072e04abc-k8s-coredns--674b8bbfcf--458z5-eth0" Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.079 [INFO][6304] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:33.082686 containerd[1792]: 2026-03-07 01:26:33.081 [INFO][6297] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a" Mar 7 01:26:33.084209 containerd[1792]: time="2026-03-07T01:26:33.083381069Z" level=info msg="TearDown network for sandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\" successfully" Mar 7 01:26:33.090018 containerd[1792]: time="2026-03-07T01:26:33.089983580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:33.090116 containerd[1792]: time="2026-03-07T01:26:33.090059860Z" level=info msg="RemovePodSandbox \"2810e0f35077fbe292a9b5bdac6b71def78cce6678af48e6672cd88cd79cf20a\" returns successfully" Mar 7 01:26:33.090755 containerd[1792]: time="2026-03-07T01:26:33.090485300Z" level=info msg="StopPodSandbox for \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\"" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.120 [WARNING][6318] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8", Pod:"csi-node-driver-m29f2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c208ebf924", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.120 [INFO][6318] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.120 [INFO][6318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" iface="eth0" netns="" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.120 [INFO][6318] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.120 [INFO][6318] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.137 [INFO][6325] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.137 [INFO][6325] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.137 [INFO][6325] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.146 [WARNING][6325] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.146 [INFO][6325] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.147 [INFO][6325] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:33.151073 containerd[1792]: 2026-03-07 01:26:33.149 [INFO][6318] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.151659 containerd[1792]: time="2026-03-07T01:26:33.151541057Z" level=info msg="TearDown network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\" successfully" Mar 7 01:26:33.151659 containerd[1792]: time="2026-03-07T01:26:33.151570857Z" level=info msg="StopPodSandbox for \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\" returns successfully" Mar 7 01:26:33.151997 containerd[1792]: time="2026-03-07T01:26:33.151968657Z" level=info msg="RemovePodSandbox for \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\"" Mar 7 01:26:33.152035 containerd[1792]: time="2026-03-07T01:26:33.152006217Z" level=info msg="Forcibly stopping sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\"" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.183 [WARNING][6339] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d63cfcaa-807b-4baa-81f4-479a9dbfdd0f", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0072e04abc", ContainerID:"a3da908faabfdd7c4c103743dbd1fb26d92f395bf18b097148c5683cf7c2b8f8", Pod:"csi-node-driver-m29f2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c208ebf924", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.183 [INFO][6339] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.183 [INFO][6339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" iface="eth0" netns="" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.183 [INFO][6339] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.183 [INFO][6339] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.201 [INFO][6346] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.201 [INFO][6346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.201 [INFO][6346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.209 [WARNING][6346] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.209 [INFO][6346] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" HandleID="k8s-pod-network.d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Workload="ci--4081.3.6--n--0072e04abc-k8s-csi--node--driver--m29f2-eth0" Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.211 [INFO][6346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:26:33.214463 containerd[1792]: 2026-03-07 01:26:33.212 [INFO][6339] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c" Mar 7 01:26:33.214463 containerd[1792]: time="2026-03-07T01:26:33.214441133Z" level=info msg="TearDown network for sandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\" successfully" Mar 7 01:26:33.221091 containerd[1792]: time="2026-03-07T01:26:33.221053284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:26:33.221195 containerd[1792]: time="2026-03-07T01:26:33.221124884Z" level=info msg="RemovePodSandbox \"d812795bcab9ab6f4de0cb2baf9f9b9c1d2f5a6c1324f062a34318b4ff1ad96c\" returns successfully" Mar 7 01:26:35.285321 kubelet[3349]: I0307 01:26:35.285013 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:26:56.434371 kubelet[3349]: I0307 01:26:56.433089 3349 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:27:12.303588 systemd[1]: run-containerd-runc-k8s.io-d5c8882f4369359ce9f1ee6ab7ae038c7989f5f0979459c9b91a968ac9540454-runc.28ncZq.mount: Deactivated successfully. Mar 7 01:27:16.346680 systemd[1]: run-containerd-runc-k8s.io-745abd936b0746d538c60f94e8a90b1f1f780ebba12f3663c3129fd82ad016bb-runc.OAnLxm.mount: Deactivated successfully. Mar 7 01:27:16.872518 systemd[1]: Started sshd@7-10.200.20.23:22-10.200.16.10:45492.service - OpenSSH per-connection server daemon (10.200.16.10:45492). Mar 7 01:27:17.358641 sshd[6554]: Accepted publickey for core from 10.200.16.10 port 45492 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:17.360635 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:17.365217 systemd-logind[1773]: New session 10 of user core. Mar 7 01:27:17.369595 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:27:17.775516 sshd[6554]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:17.778939 systemd[1]: sshd@7-10.200.20.23:22-10.200.16.10:45492.service: Deactivated successfully. Mar 7 01:27:17.783725 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:27:17.785701 systemd-logind[1773]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:27:17.786554 systemd-logind[1773]: Removed session 10. Mar 7 01:27:22.861615 systemd[1]: Started sshd@8-10.200.20.23:22-10.200.16.10:51188.service - OpenSSH per-connection server daemon (10.200.16.10:51188). 
Mar 7 01:27:23.351476 sshd[6590]: Accepted publickey for core from 10.200.16.10 port 51188 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:23.353183 sshd[6590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:23.359188 systemd-logind[1773]: New session 11 of user core. Mar 7 01:27:23.368604 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:27:23.785514 sshd[6590]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:23.788390 systemd-logind[1773]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:27:23.791623 systemd[1]: sshd@8-10.200.20.23:22-10.200.16.10:51188.service: Deactivated successfully. Mar 7 01:27:23.794210 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:27:23.796080 systemd-logind[1773]: Removed session 11. Mar 7 01:27:28.871524 systemd[1]: Started sshd@9-10.200.20.23:22-10.200.16.10:51200.service - OpenSSH per-connection server daemon (10.200.16.10:51200). Mar 7 01:27:29.355170 sshd[6616]: Accepted publickey for core from 10.200.16.10 port 51200 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:29.356003 sshd[6616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:29.359690 systemd-logind[1773]: New session 12 of user core. Mar 7 01:27:29.365608 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:27:29.764993 sshd[6616]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:29.768437 systemd[1]: sshd@9-10.200.20.23:22-10.200.16.10:51200.service: Deactivated successfully. Mar 7 01:27:29.772065 systemd-logind[1773]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:27:29.772713 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:27:29.774483 systemd-logind[1773]: Removed session 12. 
Mar 7 01:27:34.851527 systemd[1]: Started sshd@10-10.200.20.23:22-10.200.16.10:44628.service - OpenSSH per-connection server daemon (10.200.16.10:44628). Mar 7 01:27:35.338060 sshd[6642]: Accepted publickey for core from 10.200.16.10 port 44628 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:35.339470 sshd[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:35.345844 systemd-logind[1773]: New session 13 of user core. Mar 7 01:27:35.350918 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:27:35.748812 sshd[6642]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:35.752106 systemd[1]: sshd@10-10.200.20.23:22-10.200.16.10:44628.service: Deactivated successfully. Mar 7 01:27:35.755402 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:27:35.757673 systemd-logind[1773]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:27:35.758683 systemd-logind[1773]: Removed session 13. Mar 7 01:27:35.833539 systemd[1]: Started sshd@11-10.200.20.23:22-10.200.16.10:44640.service - OpenSSH per-connection server daemon (10.200.16.10:44640). Mar 7 01:27:36.320404 sshd[6656]: Accepted publickey for core from 10.200.16.10 port 44640 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:36.321358 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:36.325095 systemd-logind[1773]: New session 14 of user core. Mar 7 01:27:36.329534 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:27:36.792928 sshd[6656]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:36.797029 systemd[1]: sshd@11-10.200.20.23:22-10.200.16.10:44640.service: Deactivated successfully. Mar 7 01:27:36.800635 systemd-logind[1773]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:27:36.801556 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 7 01:27:36.803027 systemd-logind[1773]: Removed session 14. Mar 7 01:27:36.879530 systemd[1]: Started sshd@12-10.200.20.23:22-10.200.16.10:44656.service - OpenSSH per-connection server daemon (10.200.16.10:44656). Mar 7 01:27:37.367387 sshd[6684]: Accepted publickey for core from 10.200.16.10 port 44656 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:37.368620 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:37.372665 systemd-logind[1773]: New session 15 of user core. Mar 7 01:27:37.377669 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:27:37.806735 sshd[6684]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:37.809781 systemd[1]: sshd@12-10.200.20.23:22-10.200.16.10:44656.service: Deactivated successfully. Mar 7 01:27:37.810106 systemd-logind[1773]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:27:37.817883 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:27:37.820891 systemd-logind[1773]: Removed session 15. Mar 7 01:27:42.891542 systemd[1]: Started sshd@13-10.200.20.23:22-10.200.16.10:38288.service - OpenSSH per-connection server daemon (10.200.16.10:38288). Mar 7 01:27:43.379903 sshd[6723]: Accepted publickey for core from 10.200.16.10 port 38288 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:43.380805 sshd[6723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:43.385073 systemd-logind[1773]: New session 16 of user core. Mar 7 01:27:43.390543 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:27:43.791185 sshd[6723]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:43.794119 systemd-logind[1773]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:27:43.794549 systemd[1]: sshd@13-10.200.20.23:22-10.200.16.10:38288.service: Deactivated successfully. 
Mar 7 01:27:43.798034 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:27:43.800418 systemd-logind[1773]: Removed session 16. Mar 7 01:27:43.873517 systemd[1]: Started sshd@14-10.200.20.23:22-10.200.16.10:38302.service - OpenSSH per-connection server daemon (10.200.16.10:38302). Mar 7 01:27:44.357937 sshd[6755]: Accepted publickey for core from 10.200.16.10 port 38302 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:44.358803 sshd[6755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:44.362376 systemd-logind[1773]: New session 17 of user core. Mar 7 01:27:44.368531 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:27:44.908999 sshd[6755]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:44.913641 systemd[1]: sshd@14-10.200.20.23:22-10.200.16.10:38302.service: Deactivated successfully. Mar 7 01:27:44.916635 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:27:44.917769 systemd-logind[1773]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:27:44.918989 systemd-logind[1773]: Removed session 17. Mar 7 01:27:44.978611 systemd[1]: Started sshd@15-10.200.20.23:22-10.200.16.10:38304.service - OpenSSH per-connection server daemon (10.200.16.10:38304). Mar 7 01:27:45.467618 sshd[6767]: Accepted publickey for core from 10.200.16.10 port 38304 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:45.468515 sshd[6767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:45.472411 systemd-logind[1773]: New session 18 of user core. Mar 7 01:27:45.481608 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:27:46.502523 sshd[6767]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:46.505273 systemd[1]: sshd@15-10.200.20.23:22-10.200.16.10:38304.service: Deactivated successfully. Mar 7 01:27:46.508907 systemd-logind[1773]: Session 18 logged out. 
Waiting for processes to exit. Mar 7 01:27:46.509865 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:27:46.511647 systemd-logind[1773]: Removed session 18. Mar 7 01:27:46.586522 systemd[1]: Started sshd@16-10.200.20.23:22-10.200.16.10:38318.service - OpenSSH per-connection server daemon (10.200.16.10:38318). Mar 7 01:27:47.071247 sshd[6826]: Accepted publickey for core from 10.200.16.10 port 38318 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:47.074123 sshd[6826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:47.079353 systemd-logind[1773]: New session 19 of user core. Mar 7 01:27:47.084661 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:27:47.597656 sshd[6826]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:47.602605 systemd-logind[1773]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:27:47.603168 systemd[1]: sshd@16-10.200.20.23:22-10.200.16.10:38318.service: Deactivated successfully. Mar 7 01:27:47.606385 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:27:47.607749 systemd-logind[1773]: Removed session 19. Mar 7 01:27:47.681737 systemd[1]: Started sshd@17-10.200.20.23:22-10.200.16.10:38326.service - OpenSSH per-connection server daemon (10.200.16.10:38326). Mar 7 01:27:48.167888 sshd[6837]: Accepted publickey for core from 10.200.16.10 port 38326 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:48.169219 sshd[6837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:48.173197 systemd-logind[1773]: New session 20 of user core. Mar 7 01:27:48.177614 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:27:48.574295 sshd[6837]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:48.578335 systemd[1]: sshd@17-10.200.20.23:22-10.200.16.10:38326.service: Deactivated successfully. 
Mar 7 01:27:48.581713 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:27:48.583111 systemd-logind[1773]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:27:48.584048 systemd-logind[1773]: Removed session 20. Mar 7 01:27:53.659539 systemd[1]: Started sshd@18-10.200.20.23:22-10.200.16.10:41316.service - OpenSSH per-connection server daemon (10.200.16.10:41316). Mar 7 01:27:54.143410 sshd[6888]: Accepted publickey for core from 10.200.16.10 port 41316 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:54.144726 sshd[6888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:54.148780 systemd-logind[1773]: New session 21 of user core. Mar 7 01:27:54.153602 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:27:54.555528 sshd[6888]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:54.559714 systemd[1]: sshd@18-10.200.20.23:22-10.200.16.10:41316.service: Deactivated successfully. Mar 7 01:27:54.562798 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:27:54.564815 systemd-logind[1773]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:27:54.565829 systemd-logind[1773]: Removed session 21. Mar 7 01:27:59.646612 systemd[1]: Started sshd@19-10.200.20.23:22-10.200.16.10:41320.service - OpenSSH per-connection server daemon (10.200.16.10:41320). Mar 7 01:28:00.130352 sshd[6901]: Accepted publickey for core from 10.200.16.10 port 41320 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:28:00.131685 sshd[6901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:00.136133 systemd-logind[1773]: New session 22 of user core. Mar 7 01:28:00.143646 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 7 01:28:00.537738 sshd[6901]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:00.541498 systemd[1]: sshd@19-10.200.20.23:22-10.200.16.10:41320.service: Deactivated successfully. Mar 7 01:28:00.544403 systemd-logind[1773]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:28:00.544584 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:28:00.546517 systemd-logind[1773]: Removed session 22. Mar 7 01:28:05.622514 systemd[1]: Started sshd@20-10.200.20.23:22-10.200.16.10:42904.service - OpenSSH per-connection server daemon (10.200.16.10:42904). Mar 7 01:28:06.102325 sshd[6915]: Accepted publickey for core from 10.200.16.10 port 42904 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:28:06.103612 sshd[6915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:06.107377 systemd-logind[1773]: New session 23 of user core. Mar 7 01:28:06.110541 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:28:06.512901 sshd[6915]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:06.515852 systemd-logind[1773]: Session 23 logged out. Waiting for processes to exit. Mar 7 01:28:06.516015 systemd[1]: sshd@20-10.200.20.23:22-10.200.16.10:42904.service: Deactivated successfully. Mar 7 01:28:06.519351 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:28:06.520631 systemd-logind[1773]: Removed session 23. Mar 7 01:28:11.597522 systemd[1]: Started sshd@21-10.200.20.23:22-10.200.16.10:41746.service - OpenSSH per-connection server daemon (10.200.16.10:41746). Mar 7 01:28:12.085400 sshd[6930]: Accepted publickey for core from 10.200.16.10 port 41746 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:28:12.086238 sshd[6930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:12.091943 systemd-logind[1773]: New session 24 of user core. 
Mar 7 01:28:12.093577 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:28:12.507095 sshd[6930]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:12.510806 systemd[1]: sshd@21-10.200.20.23:22-10.200.16.10:41746.service: Deactivated successfully. Mar 7 01:28:12.514055 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:28:12.516220 systemd-logind[1773]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:28:12.517702 systemd-logind[1773]: Removed session 24. Mar 7 01:28:15.135341 systemd[1]: run-containerd-runc-k8s.io-745abd936b0746d538c60f94e8a90b1f1f780ebba12f3663c3129fd82ad016bb-runc.dU5qa0.mount: Deactivated successfully. Mar 7 01:28:17.596551 systemd[1]: Started sshd@22-10.200.20.23:22-10.200.16.10:41754.service - OpenSSH per-connection server daemon (10.200.16.10:41754). Mar 7 01:28:18.082993 sshd[7019]: Accepted publickey for core from 10.200.16.10 port 41754 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:28:18.084426 sshd[7019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:28:18.088198 systemd-logind[1773]: New session 25 of user core. Mar 7 01:28:18.094558 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 01:28:18.510991 sshd[7019]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:18.514494 systemd[1]: sshd@22-10.200.20.23:22-10.200.16.10:41754.service: Deactivated successfully. Mar 7 01:28:18.517268 systemd-logind[1773]: Session 25 logged out. Waiting for processes to exit. Mar 7 01:28:18.517788 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 01:28:18.519249 systemd-logind[1773]: Removed session 25.