Mar 7 01:26:49.176930 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 7 01:26:49.176952 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026
Mar 7 01:26:49.176961 kernel: KASLR enabled
Mar 7 01:26:49.176966 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 7 01:26:49.176974 kernel: printk: bootconsole [pl11] enabled
Mar 7 01:26:49.176979 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:26:49.176987 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 7 01:26:49.176993 kernel: random: crng init done
Mar 7 01:26:49.177002 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:26:49.177008 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 7 01:26:49.177015 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177021 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177029 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 01:26:49.177035 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177043 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177049 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177056 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177064 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177070 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177077 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 7 01:26:49.177083 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:26:49.177090 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 7 01:26:49.177102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 7 01:26:49.177109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 7 01:26:49.177116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 7 01:26:49.177122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 7 01:26:49.177129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 7 01:26:49.177135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 7 01:26:49.177143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 7 01:26:49.177150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 7 01:26:49.177156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 7 01:26:49.177163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 7 01:26:49.177169 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 7 01:26:49.177175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 7 01:26:49.177182 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 7 01:26:49.177188 kernel: Zone ranges:
Mar 7 01:26:49.177195 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 7 01:26:49.177201 kernel: DMA32 empty
Mar 7 01:26:49.177207 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 01:26:49.177214 kernel: Movable zone start for each node
Mar 7 01:26:49.177224 kernel: Early memory node ranges
Mar 7 01:26:49.177231 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 7 01:26:49.177238 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 7 01:26:49.177245 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 7 01:26:49.177252 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 7 01:26:49.177260 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 7 01:26:49.177267 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 7 01:26:49.177273 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 7 01:26:49.177280 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 7 01:26:49.177287 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 7 01:26:49.177294 kernel: psci: probing for conduit method from ACPI.
Mar 7 01:26:49.177301 kernel: psci: PSCIv1.1 detected in firmware.
Mar 7 01:26:49.177307 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 01:26:49.177314 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 7 01:26:49.177321 kernel: psci: SMC Calling Convention v1.4
Mar 7 01:26:49.177328 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 7 01:26:49.177335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 7 01:26:49.177343 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 7 01:26:49.177350 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 7 01:26:49.177357 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 01:26:49.177363 kernel: Detected PIPT I-cache on CPU0
Mar 7 01:26:49.177370 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 01:26:49.177377 kernel: CPU features: detected: Hardware dirty bit management
Mar 7 01:26:49.177384 kernel: CPU features: detected: Spectre-BHB
Mar 7 01:26:49.177391 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 7 01:26:49.177398 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 7 01:26:49.177404 kernel: CPU features: detected: ARM erratum 1418040
Mar 7 01:26:49.177411 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 7 01:26:49.177420 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 7 01:26:49.177427 kernel: alternatives: applying boot alternatives
Mar 7 01:26:49.177435 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 01:26:49.177442 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:26:49.177449 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:26:49.177456 kernel: Fallback order for Node 0: 0
Mar 7 01:26:49.177463 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 7 01:26:49.177469 kernel: Policy zone: Normal
Mar 7 01:26:49.177476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:26:49.177483 kernel: software IO TLB: area num 2.
Mar 7 01:26:49.177490 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 7 01:26:49.177499 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 7 01:26:49.177506 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:26:49.177513 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:26:49.177520 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:26:49.177527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:26:49.177534 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:26:49.177541 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:26:49.177548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:26:49.177555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:26:49.177562 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 01:26:49.177568 kernel: GICv3: 960 SPIs implemented
Mar 7 01:26:49.177577 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 01:26:49.177584 kernel: Root IRQ handler: gic_handle_irq
Mar 7 01:26:49.177590 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 7 01:26:49.177597 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 7 01:26:49.177604 kernel: ITS: No ITS available, not enabling LPIs
Mar 7 01:26:49.177611 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:26:49.177618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 01:26:49.177625 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 7 01:26:49.177632 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 7 01:26:49.177639 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 7 01:26:49.177646 kernel: Console: colour dummy device 80x25
Mar 7 01:26:49.177655 kernel: printk: console [tty1] enabled
Mar 7 01:26:49.177662 kernel: ACPI: Core revision 20230628
Mar 7 01:26:49.177670 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 7 01:26:49.177677 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:26:49.177684 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:26:49.177691 kernel: landlock: Up and running.
Mar 7 01:26:49.177698 kernel: SELinux: Initializing.
Mar 7 01:26:49.177705 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:26:49.177712 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:26:49.177720 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:26:49.177728 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:26:49.177735 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 7 01:26:49.177742 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 7 01:26:49.177749 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 01:26:49.177756 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:26:49.177763 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:26:49.177770 kernel: Remapping and enabling EFI services.
Mar 7 01:26:49.177784 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:26:49.177791 kernel: Detected PIPT I-cache on CPU1
Mar 7 01:26:49.177799 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 7 01:26:49.177806 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 01:26:49.177815 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 7 01:26:49.177822 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:26:49.177830 kernel: SMP: Total of 2 processors activated.
Mar 7 01:26:49.177837 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 01:26:49.177845 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 7 01:26:49.177854 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 7 01:26:49.177861 kernel: CPU features: detected: CRC32 instructions
Mar 7 01:26:49.177868 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 7 01:26:49.177876 kernel: CPU features: detected: LSE atomic instructions
Mar 7 01:26:49.177883 kernel: CPU features: detected: Privileged Access Never
Mar 7 01:26:49.177891 kernel: CPU: All CPU(s) started at EL1
Mar 7 01:26:49.177898 kernel: alternatives: applying system-wide alternatives
Mar 7 01:26:49.177905 kernel: devtmpfs: initialized
Mar 7 01:26:49.177913 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:26:49.177922 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:26:49.177929 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:26:49.177936 kernel: SMBIOS 3.1.0 present.
Mar 7 01:26:49.177944 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 7 01:26:49.177951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:26:49.177959 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 01:26:49.177966 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 01:26:49.177974 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 01:26:49.177981 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:26:49.177991 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 7 01:26:49.177998 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:26:49.178005 kernel: cpuidle: using governor menu
Mar 7 01:26:49.178013 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 01:26:49.178020 kernel: ASID allocator initialised with 32768 entries
Mar 7 01:26:49.178028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:26:49.178035 kernel: Serial: AMBA PL011 UART driver
Mar 7 01:26:49.178042 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 7 01:26:49.178050 kernel: Modules: 0 pages in range for non-PLT usage
Mar 7 01:26:49.178058 kernel: Modules: 509008 pages in range for PLT usage
Mar 7 01:26:49.178066 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:26:49.178074 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:26:49.178081 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 01:26:49.178088 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 01:26:49.180127 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:26:49.180139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:26:49.180146 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 01:26:49.180155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 01:26:49.180168 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:26:49.180176 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:26:49.180184 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:26:49.180191 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:26:49.180199 kernel: ACPI: Interpreter enabled
Mar 7 01:26:49.180206 kernel: ACPI: Using GIC for interrupt routing
Mar 7 01:26:49.180214 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 7 01:26:49.180221 kernel: printk: console [ttyAMA0] enabled
Mar 7 01:26:49.180228 kernel: printk: bootconsole [pl11] disabled
Mar 7 01:26:49.180237 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 7 01:26:49.180245 kernel: iommu: Default domain type: Translated
Mar 7 01:26:49.180252 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 01:26:49.180260 kernel: efivars: Registered efivars operations
Mar 7 01:26:49.180267 kernel: vgaarb: loaded
Mar 7 01:26:49.180274 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 01:26:49.180282 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:26:49.180290 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:26:49.180297 kernel: pnp: PnP ACPI init
Mar 7 01:26:49.180307 kernel: pnp: PnP ACPI: found 0 devices
Mar 7 01:26:49.180314 kernel: NET: Registered PF_INET protocol family
Mar 7 01:26:49.180322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:26:49.180330 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:26:49.180338 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:26:49.180346 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:26:49.180354 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:26:49.180361 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:26:49.180369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:26:49.180378 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:26:49.180385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:26:49.180392 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:26:49.180400 kernel: kvm [1]: HYP mode not available
Mar 7 01:26:49.180407 kernel: Initialise system trusted keyrings
Mar 7 01:26:49.180415 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:26:49.180422 kernel: Key type asymmetric registered
Mar 7 01:26:49.180429 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:26:49.180437 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 01:26:49.180446 kernel: io scheduler mq-deadline registered
Mar 7 01:26:49.180453 kernel: io scheduler kyber registered
Mar 7 01:26:49.180460 kernel: io scheduler bfq registered
Mar 7 01:26:49.180468 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:26:49.180475 kernel: thunder_xcv, ver 1.0
Mar 7 01:26:49.180483 kernel: thunder_bgx, ver 1.0
Mar 7 01:26:49.180490 kernel: nicpf, ver 1.0
Mar 7 01:26:49.180497 kernel: nicvf, ver 1.0
Mar 7 01:26:49.180643 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 7 01:26:49.180716 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T01:26:48 UTC (1772846808)
Mar 7 01:26:49.180726 kernel: efifb: probing for efifb
Mar 7 01:26:49.180734 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 01:26:49.180742 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 01:26:49.180749 kernel: efifb: scrolling: redraw
Mar 7 01:26:49.180757 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:26:49.180764 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:26:49.180772 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:26:49.180781 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 7 01:26:49.180789 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 01:26:49.180796 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 7 01:26:49.180804 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 7 01:26:49.180811 kernel: watchdog: Hard watchdog permanently disabled
Mar 7 01:26:49.180819 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:26:49.180826 kernel: Segment Routing with IPv6
Mar 7 01:26:49.180834 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:26:49.180841 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:26:49.180850 kernel: Key type dns_resolver registered
Mar 7 01:26:49.180858 kernel: registered taskstats version 1
Mar 7 01:26:49.180865 kernel: Loading compiled-in X.509 certificates
Mar 7 01:26:49.180873 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9'
Mar 7 01:26:49.180880 kernel: Key type .fscrypt registered
Mar 7 01:26:49.180887 kernel: Key type fscrypt-provisioning registered
Mar 7 01:26:49.180895 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:26:49.180902 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:26:49.180909 kernel: ima: No architecture policies found
Mar 7 01:26:49.180918 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 7 01:26:49.180926 kernel: clk: Disabling unused clocks
Mar 7 01:26:49.180933 kernel: Freeing unused kernel memory: 39424K
Mar 7 01:26:49.180941 kernel: Run /init as init process
Mar 7 01:26:49.180948 kernel: with arguments:
Mar 7 01:26:49.180955 kernel: /init
Mar 7 01:26:49.180963 kernel: with environment:
Mar 7 01:26:49.180970 kernel: HOME=/
Mar 7 01:26:49.180978 kernel: TERM=linux
Mar 7 01:26:49.180987 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:26:49.180998 systemd[1]: Detected virtualization microsoft.
Mar 7 01:26:49.181007 systemd[1]: Detected architecture arm64.
Mar 7 01:26:49.181014 systemd[1]: Running in initrd.
Mar 7 01:26:49.181022 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:26:49.181029 systemd[1]: Hostname set to .
Mar 7 01:26:49.181038 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:26:49.181047 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:26:49.181056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:26:49.181064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:26:49.181073 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:26:49.181081 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:26:49.181089 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:26:49.181108 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:26:49.181118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:26:49.181129 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:26:49.181137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:26:49.181145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:26:49.181153 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:26:49.181161 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:26:49.181169 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:26:49.181177 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:26:49.181185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:26:49.181195 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:26:49.181203 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:26:49.181211 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:26:49.181220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:26:49.181228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:26:49.181236 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:26:49.181244 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:26:49.181252 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:26:49.181262 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:26:49.181271 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:26:49.181279 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:26:49.181287 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:26:49.181295 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:26:49.181321 systemd-journald[217]: Collecting audit messages is disabled.
Mar 7 01:26:49.181342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:26:49.181351 systemd-journald[217]: Journal started
Mar 7 01:26:49.181369 systemd-journald[217]: Runtime Journal (/run/log/journal/93bc658ef55e40fca4b5680f72ab7fd0) is 8.0M, max 78.5M, 70.5M free.
Mar 7 01:26:49.182445 systemd-modules-load[218]: Inserted module 'overlay'
Mar 7 01:26:49.203002 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:26:49.203026 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:26:49.207354 kernel: Bridge firewalling registered
Mar 7 01:26:49.207203 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 7 01:26:49.214255 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:26:49.219119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:26:49.229536 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:26:49.238040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:26:49.246262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:26:49.262311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:26:49.268225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:26:49.290070 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:26:49.302241 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:26:49.318713 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:26:49.324579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:26:49.329435 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:26:49.342481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:26:49.365353 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:26:49.379432 dracut-cmdline[251]: dracut-dracut-053
Mar 7 01:26:49.391179 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 01:26:49.386365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:26:49.432575 systemd-resolved[258]: Positive Trust Anchors:
Mar 7 01:26:49.432584 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:26:49.437669 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:26:49.438392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:26:49.440753 systemd-resolved[258]: Defaulting to hostname 'linux'.
Mar 7 01:26:49.513143 kernel: SCSI subsystem initialized
Mar 7 01:26:49.447861 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:26:49.460764 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:26:49.495229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:26:49.530218 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:26:49.537118 kernel: iscsi: registered transport (tcp)
Mar 7 01:26:49.552945 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:26:49.552963 kernel: QLogic iSCSI HBA Driver
Mar 7 01:26:49.586286 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:26:49.597294 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:26:49.624449 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:26:49.624515 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:26:49.629420 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:26:49.676115 kernel: raid6: neonx8 gen() 15807 MB/s
Mar 7 01:26:49.695100 kernel: raid6: neonx4 gen() 15701 MB/s
Mar 7 01:26:49.714098 kernel: raid6: neonx2 gen() 13226 MB/s
Mar 7 01:26:49.734099 kernel: raid6: neonx1 gen() 10485 MB/s
Mar 7 01:26:49.753098 kernel: raid6: int64x8 gen() 6988 MB/s
Mar 7 01:26:49.772098 kernel: raid6: int64x4 gen() 7365 MB/s
Mar 7 01:26:49.792098 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 7 01:26:49.813828 kernel: raid6: int64x1 gen() 5072 MB/s
Mar 7 01:26:49.813839 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s
Mar 7 01:26:49.835617 kernel: raid6: .... xor() 12052 MB/s, rmw enabled
Mar 7 01:26:49.835637 kernel: raid6: using neon recovery algorithm
Mar 7 01:26:49.846303 kernel: xor: measuring software checksum speed
Mar 7 01:26:49.846322 kernel: 8regs : 19778 MB/sec
Mar 7 01:26:49.849115 kernel: 32regs : 19660 MB/sec
Mar 7 01:26:49.854532 kernel: arm64_neon : 26322 MB/sec
Mar 7 01:26:49.854546 kernel: xor: using function: arm64_neon (26322 MB/sec)
Mar 7 01:26:49.904154 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:26:49.913187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:26:49.927216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:26:49.943288 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Mar 7 01:26:49.947466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:26:49.963211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:26:49.983002 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Mar 7 01:26:50.010393 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:26:50.022257 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:26:50.059772 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:26:50.076289 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:26:50.103690 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:26:50.115626 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:26:50.127607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:26:50.134142 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:26:50.152341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:26:50.169897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:26:50.182754 kernel: hv_vmbus: Vmbus version:5.3
Mar 7 01:26:50.182777 kernel: hv_vmbus: registering driver hid_hyperv
Mar 7 01:26:50.170021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:26:50.217287 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 7 01:26:50.217313 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 7 01:26:50.217456 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 7 01:26:50.217467 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 7 01:26:50.217294 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:26:50.247044 kernel: hv_vmbus: registering driver hv_netvsc
Mar 7 01:26:50.247066 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 7 01:26:50.247077 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 7 01:26:50.232301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:26:50.232468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:26:50.274537 kernel: PTP clock support registered
Mar 7 01:26:50.252483 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:26:50.286399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:26:50.305364 kernel: hv_vmbus: registering driver hv_storvsc
Mar 7 01:26:50.305395 kernel: scsi host0: storvsc_host_t
Mar 7 01:26:50.305884 kernel: scsi host1: storvsc_host_t
Mar 7 01:26:50.297951 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:26:50.316110 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 7 01:26:50.322105 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 7 01:26:50.322989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:26:50.327352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:26:50.344474 kernel: hv_utils: Registering HyperV Utility Driver
Mar 7 01:26:50.344497 kernel: hv_vmbus: registering driver hv_utils
Mar 7 01:26:50.351676 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: VF slot 1 added
Mar 7 01:26:50.354493 kernel: hv_utils: Shutdown IC version 3.2
Mar 7 01:26:50.360107 kernel: hv_utils: Heartbeat IC version 3.0
Mar 7 01:26:50.360139 kernel: hv_utils: TimeSync IC version 4.0
Mar 7 01:26:49.939807 systemd-resolved[258]: Clock change detected. Flushing caches.
Mar 7 01:26:49.959279 kernel: hv_vmbus: registering driver hv_pci
Mar 7 01:26:49.959298 systemd-journald[217]: Time jumped backwards, rotating.
Mar 7 01:26:49.959334 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 7 01:26:49.959471 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:26:49.959480 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 7 01:26:49.959569 kernel: hv_pci 46b41545-3c29-4998-b17c-323fd65705fa: PCI VMBus probing: Using version 0x10004 Mar 7 01:26:49.959664 kernel: hv_pci 46b41545-3c29-4998-b17c-323fd65705fa: PCI host bridge to bus 3c29:00 Mar 7 01:26:49.959744 kernel: pci_bus 3c29:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 7 01:26:49.959838 kernel: pci_bus 3c29:00: No busn resource found for root bus, will use [bus 00-ff] Mar 7 01:26:49.959916 kernel: pci 3c29:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 7 01:26:49.944137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:26:50.006814 kernel: pci 3c29:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 7 01:26:50.006851 kernel: pci 3c29:00:02.0: enabling Extended Tags Mar 7 01:26:50.021952 kernel: pci 3c29:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3c29:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 7 01:26:50.022239 kernel: pci_bus 3c29:00: busn_res: [bus 00-ff] end is updated to 00 Mar 7 01:26:50.026141 kernel: pci 3c29:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 7 01:26:50.027029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:26:50.027187 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 7 01:26:50.027305 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 7 01:26:50.027394 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:26:50.027526 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 7 01:26:50.027626 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 7 01:26:50.045837 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:26:50.045880 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:26:50.077010 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#181 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:26:50.082299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:26:50.102228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:26:50.121319 kernel: mlx5_core 3c29:00:02.0: enabling device (0000 -> 0002) Mar 7 01:26:50.121508 kernel: mlx5_core 3c29:00:02.0: firmware version: 16.30.5026 Mar 7 01:26:50.135758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:26:50.303316 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: VF registering: eth1 Mar 7 01:26:50.303554 kernel: mlx5_core 3c29:00:02.0 eth1: joined to eth0 Mar 7 01:26:50.310025 kernel: mlx5_core 3c29:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 7 01:26:50.320015 kernel: mlx5_core 3c29:00:02.0 enP15401s1: renamed from eth1 Mar 7 01:26:51.520221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 7 01:26:51.540003 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (493) Mar 7 01:26:51.554432 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 7 01:26:51.559827 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 7 01:26:51.586271 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:26:51.631015 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486) Mar 7 01:26:51.647845 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:26:51.671748 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 7 01:26:52.614006 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:26:52.614061 disk-uuid[593]: The operation has completed successfully. Mar 7 01:26:52.674337 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:26:52.678155 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:26:52.716104 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:26:52.726155 sh[719]: Success Mar 7 01:26:52.753021 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 7 01:26:53.002884 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:26:53.016432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:26:53.021255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:26:53.051891 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31 Mar 7 01:26:53.051940 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:53.057411 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:26:53.061287 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:26:53.064516 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:26:53.363022 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:26:53.366965 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:26:53.385214 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:26:53.394283 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 7 01:26:53.422891 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:53.422947 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:53.422958 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:26:53.460013 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:26:53.468353 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:26:53.480006 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:53.487336 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:26:53.496442 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:26:53.515501 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:26:53.526018 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:26:53.561762 systemd-networkd[904]: lo: Link UP Mar 7 01:26:53.561771 systemd-networkd[904]: lo: Gained carrier Mar 7 01:26:53.563276 systemd-networkd[904]: Enumeration completed Mar 7 01:26:53.565792 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:26:53.566544 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:26:53.566547 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:26:53.570563 systemd[1]: Reached target network.target - Network. 
Mar 7 01:26:53.645008 kernel: mlx5_core 3c29:00:02.0 enP15401s1: Link up Mar 7 01:26:53.682638 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: Data path switched to VF: enP15401s1 Mar 7 01:26:53.682337 systemd-networkd[904]: enP15401s1: Link UP Mar 7 01:26:53.682409 systemd-networkd[904]: eth0: Link UP Mar 7 01:26:53.682530 systemd-networkd[904]: eth0: Gained carrier Mar 7 01:26:53.682538 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:26:53.692216 systemd-networkd[904]: enP15401s1: Gained carrier Mar 7 01:26:53.711042 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 01:26:54.468984 ignition[902]: Ignition 2.19.0 Mar 7 01:26:54.469005 ignition[902]: Stage: fetch-offline Mar 7 01:26:54.471360 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:26:54.469045 ignition[902]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.469053 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.469147 ignition[902]: parsed url from cmdline: "" Mar 7 01:26:54.469150 ignition[902]: no config URL provided Mar 7 01:26:54.469154 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:26:54.493195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 01:26:54.469160 ignition[902]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:26:54.469164 ignition[902]: failed to fetch config: resource requires networking Mar 7 01:26:54.469322 ignition[902]: Ignition finished successfully Mar 7 01:26:54.510999 ignition[914]: Ignition 2.19.0 Mar 7 01:26:54.511005 ignition[914]: Stage: fetch Mar 7 01:26:54.511168 ignition[914]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.511177 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.511265 ignition[914]: parsed url from cmdline: "" Mar 7 01:26:54.511268 ignition[914]: no config URL provided Mar 7 01:26:54.511272 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:26:54.511278 ignition[914]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:26:54.511298 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 01:26:54.632370 ignition[914]: GET result: OK Mar 7 01:26:54.632464 ignition[914]: config has been read from IMDS userdata Mar 7 01:26:54.632534 ignition[914]: parsing config with SHA512: 7199f285620b7fcdefca0200db3bfa381ab454bb325e1c416ff1c4676b810a7acea8cf8c3527998d3989f6e8cc7ba2cb88d3f76d4b51f797a818635ca0e96c91 Mar 7 01:26:54.640702 unknown[914]: fetched base config from "system" Mar 7 01:26:54.641768 unknown[914]: fetched base config from "system" Mar 7 01:26:54.642172 ignition[914]: fetch: fetch complete Mar 7 01:26:54.641782 unknown[914]: fetched user config from "azure" Mar 7 01:26:54.642176 ignition[914]: fetch: fetch passed Mar 7 01:26:54.644194 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:26:54.642223 ignition[914]: Ignition finished successfully Mar 7 01:26:54.665186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 7 01:26:54.685322 ignition[920]: Ignition 2.19.0 Mar 7 01:26:54.685332 ignition[920]: Stage: kargs Mar 7 01:26:54.690419 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:26:54.685538 ignition[920]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.685547 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.689262 ignition[920]: kargs: kargs passed Mar 7 01:26:54.708159 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:26:54.689313 ignition[920]: Ignition finished successfully Mar 7 01:26:54.726527 ignition[926]: Ignition 2.19.0 Mar 7 01:26:54.726535 ignition[926]: Stage: disks Mar 7 01:26:54.730421 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:26:54.726689 ignition[926]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.735403 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:26:54.726697 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.742334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:26:54.727598 ignition[926]: disks: disks passed Mar 7 01:26:54.751746 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:26:54.727640 ignition[926]: Ignition finished successfully Mar 7 01:26:54.759829 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:26:54.768877 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:26:54.790235 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:26:54.864824 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 7 01:26:54.873945 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:26:54.888205 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 7 01:26:54.942018 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none. Mar 7 01:26:54.942420 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:26:54.946446 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:26:54.967106 systemd-networkd[904]: eth0: Gained IPv6LL Mar 7 01:26:54.994054 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:26:55.013004 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Mar 7 01:26:55.025002 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:55.025049 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:55.025060 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:26:55.029160 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:26:55.038133 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 01:26:55.046924 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:26:55.066565 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:26:55.046958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:26:55.069473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:26:55.075521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:26:55.086166 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:26:55.674042 coreos-metadata[960]: Mar 07 01:26:55.674 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:26:55.680762 coreos-metadata[960]: Mar 07 01:26:55.680 INFO Fetch successful Mar 7 01:26:55.680762 coreos-metadata[960]: Mar 07 01:26:55.680 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:26:55.693186 coreos-metadata[960]: Mar 07 01:26:55.691 INFO Fetch successful Mar 7 01:26:55.709077 coreos-metadata[960]: Mar 07 01:26:55.709 INFO wrote hostname ci-4081.3.6-n-24b0a814a4 to /sysroot/etc/hostname Mar 7 01:26:55.716795 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:26:55.916515 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:26:55.967125 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:26:55.995363 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:26:56.002385 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:26:57.111405 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:26:57.123155 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:26:57.128927 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:26:57.149452 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:57.145076 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 7 01:26:57.173034 ignition[1062]: INFO : Ignition 2.19.0 Mar 7 01:26:57.176816 ignition[1062]: INFO : Stage: mount Mar 7 01:26:57.176816 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:57.176816 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:57.191330 ignition[1062]: INFO : mount: mount passed Mar 7 01:26:57.191330 ignition[1062]: INFO : Ignition finished successfully Mar 7 01:26:57.186838 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:26:57.195569 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:26:57.215329 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:26:57.226937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:26:57.250209 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074) Mar 7 01:26:57.260722 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:57.260746 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:57.264312 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:26:57.270999 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:26:57.273232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:26:57.300945 ignition[1091]: INFO : Ignition 2.19.0 Mar 7 01:26:57.300945 ignition[1091]: INFO : Stage: files Mar 7 01:26:57.307110 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:57.307110 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:57.307110 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:26:57.320971 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:26:57.320971 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:26:57.393403 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:26:57.399250 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:26:57.399250 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:26:57.399250 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 01:26:57.399250 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 7 01:26:57.394394 unknown[1091]: wrote ssh authorized keys file for user: core Mar 7 01:26:57.443718 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:26:57.632633 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 01:26:57.632633 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 7 01:26:58.054231 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:26:58.512786 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 01:26:58.512786 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:26:58.557376 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:26:58.566154 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:26:58.566154 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:26:58.566154 ignition[1091]: INFO : files: files passed Mar 7 01:26:58.566154 ignition[1091]: INFO : Ignition finished successfully Mar 7 01:26:58.566840 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:26:58.597740 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:26:58.609158 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:26:58.636402 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:26:58.636502 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:26:58.668937 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:26:58.668937 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:26:58.683012 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:26:58.676783 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:26:58.688522 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:26:58.711228 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:26:58.736468 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:26:58.736592 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:26:58.746184 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:26:58.755386 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:26:58.763978 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:26:58.777259 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:26:58.796470 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:26:58.808326 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:26:58.823668 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:26:58.828976 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:26:58.838575 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:26:58.847226 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:26:58.847384 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:26:58.859474 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:26:58.868650 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:26:58.876372 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:26:58.884789 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:26:58.894340 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:26:58.903547 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:26:58.912448 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:26:58.922316 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:26:58.931730 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:26:58.940071 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:26:58.947337 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:26:58.947496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:26:58.959066 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:26:58.967971 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:26:58.977502 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:26:58.985906 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:26:58.991459 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:26:58.991614 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 7 01:26:59.005673 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:26:59.005821 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:26:59.015265 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:26:59.015413 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:26:59.023937 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 01:26:59.024095 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:26:59.052621 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:26:59.062263 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:26:59.069792 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:26:59.089660 ignition[1143]: INFO : Ignition 2.19.0 Mar 7 01:26:59.089660 ignition[1143]: INFO : Stage: umount Mar 7 01:26:59.089660 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:59.089660 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:59.089660 ignition[1143]: INFO : umount: umount passed Mar 7 01:26:59.089660 ignition[1143]: INFO : Ignition finished successfully Mar 7 01:26:59.070041 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:26:59.085120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:26:59.085279 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:26:59.100219 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:26:59.101030 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:26:59.103014 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:26:59.110439 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 7 01:26:59.112021 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:26:59.125467 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:26:59.125543 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:26:59.137533 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:26:59.137588 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:26:59.142035 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:26:59.142071 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:26:59.146243 systemd[1]: Stopped target network.target - Network. Mar 7 01:26:59.154895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:26:59.154967 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:26:59.164958 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:26:59.172730 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:26:59.176507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:26:59.182116 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:26:59.190081 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:26:59.198650 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:26:59.198697 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:26:59.207781 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:26:59.207813 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:26:59.217899 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:26:59.217945 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:26:59.225789 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Mar 7 01:26:59.225823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:26:59.234664 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:26:59.243504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:26:59.252023 systemd-networkd[904]: eth0: DHCPv6 lease lost Mar 7 01:26:59.256405 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:26:59.256584 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:26:59.266456 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:26:59.266548 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:26:59.276717 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:26:59.276776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:26:59.450765 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: Data path switched from VF: enP15401s1 Mar 7 01:26:59.303224 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:26:59.309961 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:26:59.310037 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:26:59.318687 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:26:59.318726 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:26:59.327008 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:26:59.327051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:26:59.335689 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:26:59.335730 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:26:59.344521 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 7 01:26:59.379630 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:26:59.379809 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:26:59.390205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:26:59.390248 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:26:59.401798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:26:59.401838 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:26:59.410614 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:26:59.410657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:26:59.423481 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:26:59.423520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:26:59.438498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:26:59.438574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:26:59.470183 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:26:59.481242 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:26:59.481312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:26:59.496432 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:26:59.496488 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:26:59.501872 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:26:59.501908 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:26:59.511487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 7 01:26:59.511522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:26:59.522124 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:26:59.524011 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:26:59.531288 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:26:59.531365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:26:59.544499 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:26:59.673911 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 7 01:26:59.544584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:26:59.554531 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:26:59.562685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:26:59.562759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:26:59.584200 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:26:59.598021 systemd[1]: Switching root. 
Mar 7 01:26:59.700942 systemd-journald[217]: Journal stopped Mar 7 01:26:49.176930 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 7 01:26:49.176952 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026 Mar 7 01:26:49.176961 kernel: KASLR enabled Mar 7 01:26:49.176966 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 7 01:26:49.176974 kernel: printk: bootconsole [pl11] enabled Mar 7 01:26:49.176979 kernel: efi: EFI v2.7 by EDK II Mar 7 01:26:49.176987 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Mar 7 01:26:49.176993 kernel: random: crng init done Mar 7 01:26:49.177002 kernel: ACPI: Early table checksum verification disabled Mar 7 01:26:49.177008 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 7 01:26:49.177015 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177021 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177029 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 7 01:26:49.177035 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177043 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177049 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177056 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177064 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177070 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 
00000001 MSFT 00000001) Mar 7 01:26:49.177077 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 7 01:26:49.177083 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:26:49.177090 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 7 01:26:49.177102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Mar 7 01:26:49.177109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Mar 7 01:26:49.177116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Mar 7 01:26:49.177122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Mar 7 01:26:49.177129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Mar 7 01:26:49.177135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Mar 7 01:26:49.177143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Mar 7 01:26:49.177150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Mar 7 01:26:49.177156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Mar 7 01:26:49.177163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Mar 7 01:26:49.177169 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Mar 7 01:26:49.177175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Mar 7 01:26:49.177182 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Mar 7 01:26:49.177188 kernel: Zone ranges: Mar 7 01:26:49.177195 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 7 01:26:49.177201 kernel: DMA32 empty Mar 7 01:26:49.177207 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 7 01:26:49.177214 kernel: Movable zone start for each node Mar 7 01:26:49.177224 kernel: Early memory node ranges Mar 7 01:26:49.177231 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Mar 7 01:26:49.177238 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 7 
01:26:49.177245 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 7 01:26:49.177252 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 7 01:26:49.177260 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 7 01:26:49.177267 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 7 01:26:49.177273 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 7 01:26:49.177280 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 7 01:26:49.177287 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 7 01:26:49.177294 kernel: psci: probing for conduit method from ACPI. Mar 7 01:26:49.177301 kernel: psci: PSCIv1.1 detected in firmware. Mar 7 01:26:49.177307 kernel: psci: Using standard PSCI v0.2 function IDs Mar 7 01:26:49.177314 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 7 01:26:49.177321 kernel: psci: SMC Calling Convention v1.4 Mar 7 01:26:49.177328 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 7 01:26:49.177335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 7 01:26:49.177343 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 7 01:26:49.177350 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 7 01:26:49.177357 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 7 01:26:49.177363 kernel: Detected PIPT I-cache on CPU0 Mar 7 01:26:49.177370 kernel: CPU features: detected: GIC system register CPU interface Mar 7 01:26:49.177377 kernel: CPU features: detected: Hardware dirty bit management Mar 7 01:26:49.177384 kernel: CPU features: detected: Spectre-BHB Mar 7 01:26:49.177391 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 7 01:26:49.177398 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 7 01:26:49.177404 kernel: CPU features: detected: ARM erratum 1418040 Mar 7 01:26:49.177411 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 7 01:26:49.177420 kernel: CPU features: 
detected: SSBS not fully self-synchronizing Mar 7 01:26:49.177427 kernel: alternatives: applying boot alternatives Mar 7 01:26:49.177435 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 01:26:49.177442 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:26:49.177449 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:26:49.177456 kernel: Fallback order for Node 0: 0 Mar 7 01:26:49.177463 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 7 01:26:49.177469 kernel: Policy zone: Normal Mar 7 01:26:49.177476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:26:49.177483 kernel: software IO TLB: area num 2. Mar 7 01:26:49.177490 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 7 01:26:49.177499 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 7 01:26:49.177506 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:26:49.177513 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:26:49.177520 kernel: rcu: RCU event tracing is enabled. Mar 7 01:26:49.177527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:26:49.177534 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:26:49.177541 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:26:49.177548 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 7 01:26:49.177555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:26:49.177562 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 7 01:26:49.177568 kernel: GICv3: 960 SPIs implemented Mar 7 01:26:49.177577 kernel: GICv3: 0 Extended SPIs implemented Mar 7 01:26:49.177584 kernel: Root IRQ handler: gic_handle_irq Mar 7 01:26:49.177590 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 7 01:26:49.177597 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 7 01:26:49.177604 kernel: ITS: No ITS available, not enabling LPIs Mar 7 01:26:49.177611 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:26:49.177618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 01:26:49.177625 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 7 01:26:49.177632 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 7 01:26:49.177639 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 7 01:26:49.177646 kernel: Console: colour dummy device 80x25 Mar 7 01:26:49.177655 kernel: printk: console [tty1] enabled Mar 7 01:26:49.177662 kernel: ACPI: Core revision 20230628 Mar 7 01:26:49.177670 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 7 01:26:49.177677 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:26:49.177684 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:26:49.177691 kernel: landlock: Up and running. Mar 7 01:26:49.177698 kernel: SELinux: Initializing. 
Mar 7 01:26:49.177705 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:26:49.177712 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:26:49.177720 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:26:49.177728 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:26:49.177735 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 7 01:26:49.177742 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 7 01:26:49.177749 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 7 01:26:49.177756 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:26:49.177763 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:26:49.177770 kernel: Remapping and enabling EFI services. Mar 7 01:26:49.177784 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:26:49.177791 kernel: Detected PIPT I-cache on CPU1 Mar 7 01:26:49.177799 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 7 01:26:49.177806 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 01:26:49.177815 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 7 01:26:49.177822 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:26:49.177830 kernel: SMP: Total of 2 processors activated. 
Mar 7 01:26:49.177837 kernel: CPU features: detected: 32-bit EL0 Support Mar 7 01:26:49.177845 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 7 01:26:49.177854 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 7 01:26:49.177861 kernel: CPU features: detected: CRC32 instructions Mar 7 01:26:49.177868 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 7 01:26:49.177876 kernel: CPU features: detected: LSE atomic instructions Mar 7 01:26:49.177883 kernel: CPU features: detected: Privileged Access Never Mar 7 01:26:49.177891 kernel: CPU: All CPU(s) started at EL1 Mar 7 01:26:49.177898 kernel: alternatives: applying system-wide alternatives Mar 7 01:26:49.177905 kernel: devtmpfs: initialized Mar 7 01:26:49.177913 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:26:49.177922 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:26:49.177929 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:26:49.177936 kernel: SMBIOS 3.1.0 present. 
Mar 7 01:26:49.177944 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 7 01:26:49.177951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:26:49.177959 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 7 01:26:49.177966 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 7 01:26:49.177974 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 7 01:26:49.177981 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:26:49.177991 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 7 01:26:49.177998 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:26:49.178005 kernel: cpuidle: using governor menu Mar 7 01:26:49.178013 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 7 01:26:49.178020 kernel: ASID allocator initialised with 32768 entries Mar 7 01:26:49.178028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:26:49.178035 kernel: Serial: AMBA PL011 UART driver Mar 7 01:26:49.178042 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 7 01:26:49.178050 kernel: Modules: 0 pages in range for non-PLT usage Mar 7 01:26:49.178058 kernel: Modules: 509008 pages in range for PLT usage Mar 7 01:26:49.178066 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:26:49.178074 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:26:49.178081 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 7 01:26:49.178088 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 7 01:26:49.180127 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:26:49.180139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:26:49.180146 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Mar 7 01:26:49.180155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 7 01:26:49.180168 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:26:49.180176 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:26:49.180184 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:26:49.180191 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:26:49.180199 kernel: ACPI: Interpreter enabled Mar 7 01:26:49.180206 kernel: ACPI: Using GIC for interrupt routing Mar 7 01:26:49.180214 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 7 01:26:49.180221 kernel: printk: console [ttyAMA0] enabled Mar 7 01:26:49.180228 kernel: printk: bootconsole [pl11] disabled Mar 7 01:26:49.180237 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 7 01:26:49.180245 kernel: iommu: Default domain type: Translated Mar 7 01:26:49.180252 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 7 01:26:49.180260 kernel: efivars: Registered efivars operations Mar 7 01:26:49.180267 kernel: vgaarb: loaded Mar 7 01:26:49.180274 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 7 01:26:49.180282 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 01:26:49.180290 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:26:49.180297 kernel: pnp: PnP ACPI init Mar 7 01:26:49.180307 kernel: pnp: PnP ACPI: found 0 devices Mar 7 01:26:49.180314 kernel: NET: Registered PF_INET protocol family Mar 7 01:26:49.180322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 01:26:49.180330 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 01:26:49.180338 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:26:49.180346 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 01:26:49.180354 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 01:26:49.180361 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 01:26:49.180369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:26:49.180378 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:26:49.180385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:26:49.180392 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:26:49.180400 kernel: kvm [1]: HYP mode not available Mar 7 01:26:49.180407 kernel: Initialise system trusted keyrings Mar 7 01:26:49.180415 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 01:26:49.180422 kernel: Key type asymmetric registered Mar 7 01:26:49.180429 kernel: Asymmetric key parser 'x509' registered Mar 7 01:26:49.180437 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 7 01:26:49.180446 kernel: io scheduler mq-deadline registered Mar 7 01:26:49.180453 kernel: io scheduler kyber registered Mar 7 01:26:49.180460 kernel: io scheduler bfq registered Mar 7 01:26:49.180468 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:26:49.180475 kernel: thunder_xcv, ver 1.0 Mar 7 01:26:49.180483 kernel: thunder_bgx, ver 1.0 Mar 7 01:26:49.180490 kernel: nicpf, ver 1.0 Mar 7 01:26:49.180497 kernel: nicvf, ver 1.0 Mar 7 01:26:49.180643 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 7 01:26:49.180716 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T01:26:48 UTC (1772846808) Mar 7 01:26:49.180726 kernel: efifb: probing for efifb Mar 7 01:26:49.180734 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 7 01:26:49.180742 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 7 01:26:49.180749 kernel: efifb: scrolling: redraw Mar 7 01:26:49.180757 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 7 01:26:49.180764 kernel: Console: switching to colour 
frame buffer device 128x48 Mar 7 01:26:49.180772 kernel: fb0: EFI VGA frame buffer device Mar 7 01:26:49.180781 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 7 01:26:49.180789 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:26:49.180796 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 7 01:26:49.180804 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 7 01:26:49.180811 kernel: watchdog: Hard watchdog permanently disabled Mar 7 01:26:49.180819 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:26:49.180826 kernel: Segment Routing with IPv6 Mar 7 01:26:49.180834 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:26:49.180841 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:26:49.180850 kernel: Key type dns_resolver registered Mar 7 01:26:49.180858 kernel: registered taskstats version 1 Mar 7 01:26:49.180865 kernel: Loading compiled-in X.509 certificates Mar 7 01:26:49.180873 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9' Mar 7 01:26:49.180880 kernel: Key type .fscrypt registered Mar 7 01:26:49.180887 kernel: Key type fscrypt-provisioning registered Mar 7 01:26:49.180895 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 01:26:49.180902 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:26:49.180909 kernel: ima: No architecture policies found Mar 7 01:26:49.180918 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 7 01:26:49.180926 kernel: clk: Disabling unused clocks Mar 7 01:26:49.180933 kernel: Freeing unused kernel memory: 39424K Mar 7 01:26:49.180941 kernel: Run /init as init process Mar 7 01:26:49.180948 kernel: with arguments: Mar 7 01:26:49.180955 kernel: /init Mar 7 01:26:49.180963 kernel: with environment: Mar 7 01:26:49.180970 kernel: HOME=/ Mar 7 01:26:49.180978 kernel: TERM=linux Mar 7 01:26:49.180987 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:26:49.180998 systemd[1]: Detected virtualization microsoft. Mar 7 01:26:49.181007 systemd[1]: Detected architecture arm64. Mar 7 01:26:49.181014 systemd[1]: Running in initrd. Mar 7 01:26:49.181022 systemd[1]: No hostname configured, using default hostname. Mar 7 01:26:49.181029 systemd[1]: Hostname set to . Mar 7 01:26:49.181038 systemd[1]: Initializing machine ID from random generator. Mar 7 01:26:49.181047 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:26:49.181056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:26:49.181064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:26:49.181073 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:26:49.181081 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 7 01:26:49.181089 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:26:49.181108 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:26:49.181118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:26:49.181129 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:26:49.181137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:26:49.181145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:26:49.181153 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:26:49.181161 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:26:49.181169 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:26:49.181177 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:26:49.181185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:26:49.181195 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:26:49.181203 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:26:49.181211 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:26:49.181220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:26:49.181228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:26:49.181236 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:26:49.181244 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:26:49.181252 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:26:49.181262 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 7 01:26:49.181271 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:26:49.181279 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:26:49.181287 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:26:49.181295 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:26:49.181321 systemd-journald[217]: Collecting audit messages is disabled. Mar 7 01:26:49.181342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:26:49.181351 systemd-journald[217]: Journal started Mar 7 01:26:49.181369 systemd-journald[217]: Runtime Journal (/run/log/journal/93bc658ef55e40fca4b5680f72ab7fd0) is 8.0M, max 78.5M, 70.5M free. Mar 7 01:26:49.182445 systemd-modules-load[218]: Inserted module 'overlay' Mar 7 01:26:49.203002 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:26:49.203026 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:26:49.207354 kernel: Bridge firewalling registered Mar 7 01:26:49.207203 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 7 01:26:49.214255 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:26:49.219119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:26:49.229536 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:26:49.238040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:26:49.246262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:26:49.262311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:26:49.268225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 7 01:26:49.290070 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:26:49.302241 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:26:49.318713 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:26:49.324579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:26:49.329435 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:26:49.342481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:26:49.365353 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:26:49.379432 dracut-cmdline[251]: dracut-dracut-053 Mar 7 01:26:49.391179 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 01:26:49.386365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:26:49.432575 systemd-resolved[258]: Positive Trust Anchors: Mar 7 01:26:49.432584 systemd-resolved[258]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:26:49.437669 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:26:49.438392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:26:49.440753 systemd-resolved[258]: Defaulting to hostname 'linux'. Mar 7 01:26:49.513143 kernel: SCSI subsystem initialized Mar 7 01:26:49.447861 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:26:49.460764 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:26:49.495229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:26:49.530218 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:26:49.537118 kernel: iscsi: registered transport (tcp) Mar 7 01:26:49.552945 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:26:49.552963 kernel: QLogic iSCSI HBA Driver Mar 7 01:26:49.586286 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:26:49.597294 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:26:49.624449 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 01:26:49.624515 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:26:49.629420 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:26:49.676115 kernel: raid6: neonx8 gen() 15807 MB/s Mar 7 01:26:49.695100 kernel: raid6: neonx4 gen() 15701 MB/s Mar 7 01:26:49.714098 kernel: raid6: neonx2 gen() 13226 MB/s Mar 7 01:26:49.734099 kernel: raid6: neonx1 gen() 10485 MB/s Mar 7 01:26:49.753098 kernel: raid6: int64x8 gen() 6988 MB/s Mar 7 01:26:49.772098 kernel: raid6: int64x4 gen() 7365 MB/s Mar 7 01:26:49.792098 kernel: raid6: int64x2 gen() 6146 MB/s Mar 7 01:26:49.813828 kernel: raid6: int64x1 gen() 5072 MB/s Mar 7 01:26:49.813839 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s Mar 7 01:26:49.835617 kernel: raid6: .... xor() 12052 MB/s, rmw enabled Mar 7 01:26:49.835637 kernel: raid6: using neon recovery algorithm Mar 7 01:26:49.846303 kernel: xor: measuring software checksum speed Mar 7 01:26:49.846322 kernel: 8regs : 19778 MB/sec Mar 7 01:26:49.849115 kernel: 32regs : 19660 MB/sec Mar 7 01:26:49.854532 kernel: arm64_neon : 26322 MB/sec Mar 7 01:26:49.854546 kernel: xor: using function: arm64_neon (26322 MB/sec) Mar 7 01:26:49.904154 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:26:49.913187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:26:49.927216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:26:49.943288 systemd-udevd[437]: Using default interface naming scheme 'v255'. Mar 7 01:26:49.947466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:26:49.963211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:26:49.983002 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Mar 7 01:26:50.010393 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 7 01:26:50.022257 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:26:50.059772 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:26:50.076289 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:26:50.103690 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:26:50.115626 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:26:50.127607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:26:50.134142 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:26:50.152341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:26:50.169897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:26:50.182754 kernel: hv_vmbus: Vmbus version:5.3
Mar 7 01:26:50.182777 kernel: hv_vmbus: registering driver hid_hyperv
Mar 7 01:26:50.170021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:26:50.217287 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 7 01:26:50.217313 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 7 01:26:50.217456 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 7 01:26:50.217467 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 7 01:26:50.217294 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:26:50.247044 kernel: hv_vmbus: registering driver hv_netvsc
Mar 7 01:26:50.247066 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 7 01:26:50.247077 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 7 01:26:50.232301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:26:50.232468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:26:50.274537 kernel: PTP clock support registered
Mar 7 01:26:50.252483 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:26:50.286399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:26:50.305364 kernel: hv_vmbus: registering driver hv_storvsc
Mar 7 01:26:50.305395 kernel: scsi host0: storvsc_host_t
Mar 7 01:26:50.305884 kernel: scsi host1: storvsc_host_t
Mar 7 01:26:50.297951 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:26:50.316110 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 7 01:26:50.322105 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 7 01:26:50.322989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:26:50.327352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:26:50.344474 kernel: hv_utils: Registering HyperV Utility Driver
Mar 7 01:26:50.344497 kernel: hv_vmbus: registering driver hv_utils
Mar 7 01:26:50.351676 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: VF slot 1 added
Mar 7 01:26:50.354493 kernel: hv_utils: Shutdown IC version 3.2
Mar 7 01:26:50.360107 kernel: hv_utils: Heartbeat IC version 3.0
Mar 7 01:26:50.360139 kernel: hv_utils: TimeSync IC version 4.0
Mar 7 01:26:49.939807 systemd-resolved[258]: Clock change detected. Flushing caches.
Mar 7 01:26:49.959279 kernel: hv_vmbus: registering driver hv_pci
Mar 7 01:26:49.959298 systemd-journald[217]: Time jumped backwards, rotating.
Mar 7 01:26:49.959334 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 7 01:26:49.959471 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:26:49.959480 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 7 01:26:49.959569 kernel: hv_pci 46b41545-3c29-4998-b17c-323fd65705fa: PCI VMBus probing: Using version 0x10004
Mar 7 01:26:49.959664 kernel: hv_pci 46b41545-3c29-4998-b17c-323fd65705fa: PCI host bridge to bus 3c29:00
Mar 7 01:26:49.959744 kernel: pci_bus 3c29:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 7 01:26:49.959838 kernel: pci_bus 3c29:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 7 01:26:49.959916 kernel: pci 3c29:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 7 01:26:49.944137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:26:50.006814 kernel: pci 3c29:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 7 01:26:50.006851 kernel: pci 3c29:00:02.0: enabling Extended Tags
Mar 7 01:26:50.021952 kernel: pci 3c29:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3c29:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 7 01:26:50.022239 kernel: pci_bus 3c29:00: busn_res: [bus 00-ff] end is updated to 00
Mar 7 01:26:50.026141 kernel: pci 3c29:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 7 01:26:50.027029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:26:50.027187 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 7 01:26:50.027305 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 7 01:26:50.027394 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:26:50.027526 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 7 01:26:50.027626 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 7 01:26:50.045837 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:26:50.045880 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:26:50.077010 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#181 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:26:50.082299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:26:50.102228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:26:50.121319 kernel: mlx5_core 3c29:00:02.0: enabling device (0000 -> 0002)
Mar 7 01:26:50.121508 kernel: mlx5_core 3c29:00:02.0: firmware version: 16.30.5026
Mar 7 01:26:50.135758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:26:50.303316 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: VF registering: eth1
Mar 7 01:26:50.303554 kernel: mlx5_core 3c29:00:02.0 eth1: joined to eth0
Mar 7 01:26:50.310025 kernel: mlx5_core 3c29:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 7 01:26:50.320015 kernel: mlx5_core 3c29:00:02.0 enP15401s1: renamed from eth1
Mar 7 01:26:51.520221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 7 01:26:51.540003 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (493)
Mar 7 01:26:51.554432 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 7 01:26:51.559827 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 7 01:26:51.586271 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:26:51.631015 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486)
Mar 7 01:26:51.647845 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:26:51.671748 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 7 01:26:52.614006 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:26:52.614061 disk-uuid[593]: The operation has completed successfully. Mar 7 01:26:52.674337 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:26:52.678155 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:26:52.716104 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:26:52.726155 sh[719]: Success Mar 7 01:26:52.753021 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 7 01:26:53.002884 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:26:53.016432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:26:53.021255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:26:53.051891 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31 Mar 7 01:26:53.051940 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:53.057411 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:26:53.061287 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:26:53.064516 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:26:53.363022 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:26:53.366965 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:26:53.385214 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:26:53.394283 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 7 01:26:53.422891 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:53.422947 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:53.422958 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:26:53.460013 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:26:53.468353 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:26:53.480006 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:53.487336 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:26:53.496442 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:26:53.515501 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:26:53.526018 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:26:53.561762 systemd-networkd[904]: lo: Link UP Mar 7 01:26:53.561771 systemd-networkd[904]: lo: Gained carrier Mar 7 01:26:53.563276 systemd-networkd[904]: Enumeration completed Mar 7 01:26:53.565792 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:26:53.566544 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:26:53.566547 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:26:53.570563 systemd[1]: Reached target network.target - Network. 
Mar 7 01:26:53.645008 kernel: mlx5_core 3c29:00:02.0 enP15401s1: Link up Mar 7 01:26:53.682638 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: Data path switched to VF: enP15401s1 Mar 7 01:26:53.682337 systemd-networkd[904]: enP15401s1: Link UP Mar 7 01:26:53.682409 systemd-networkd[904]: eth0: Link UP Mar 7 01:26:53.682530 systemd-networkd[904]: eth0: Gained carrier Mar 7 01:26:53.682538 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:26:53.692216 systemd-networkd[904]: enP15401s1: Gained carrier Mar 7 01:26:53.711042 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 01:26:54.468984 ignition[902]: Ignition 2.19.0 Mar 7 01:26:54.469005 ignition[902]: Stage: fetch-offline Mar 7 01:26:54.471360 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:26:54.469045 ignition[902]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.469053 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.469147 ignition[902]: parsed url from cmdline: "" Mar 7 01:26:54.469150 ignition[902]: no config URL provided Mar 7 01:26:54.469154 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:26:54.493195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 01:26:54.469160 ignition[902]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:26:54.469164 ignition[902]: failed to fetch config: resource requires networking Mar 7 01:26:54.469322 ignition[902]: Ignition finished successfully Mar 7 01:26:54.510999 ignition[914]: Ignition 2.19.0 Mar 7 01:26:54.511005 ignition[914]: Stage: fetch Mar 7 01:26:54.511168 ignition[914]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.511177 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.511265 ignition[914]: parsed url from cmdline: "" Mar 7 01:26:54.511268 ignition[914]: no config URL provided Mar 7 01:26:54.511272 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:26:54.511278 ignition[914]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:26:54.511298 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 01:26:54.632370 ignition[914]: GET result: OK Mar 7 01:26:54.632464 ignition[914]: config has been read from IMDS userdata Mar 7 01:26:54.632534 ignition[914]: parsing config with SHA512: 7199f285620b7fcdefca0200db3bfa381ab454bb325e1c416ff1c4676b810a7acea8cf8c3527998d3989f6e8cc7ba2cb88d3f76d4b51f797a818635ca0e96c91 Mar 7 01:26:54.640702 unknown[914]: fetched base config from "system" Mar 7 01:26:54.641768 unknown[914]: fetched base config from "system" Mar 7 01:26:54.642172 ignition[914]: fetch: fetch complete Mar 7 01:26:54.641782 unknown[914]: fetched user config from "azure" Mar 7 01:26:54.642176 ignition[914]: fetch: fetch passed Mar 7 01:26:54.644194 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:26:54.642223 ignition[914]: Ignition finished successfully Mar 7 01:26:54.665186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 7 01:26:54.685322 ignition[920]: Ignition 2.19.0 Mar 7 01:26:54.685332 ignition[920]: Stage: kargs Mar 7 01:26:54.690419 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:26:54.685538 ignition[920]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.685547 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.689262 ignition[920]: kargs: kargs passed Mar 7 01:26:54.708159 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:26:54.689313 ignition[920]: Ignition finished successfully Mar 7 01:26:54.726527 ignition[926]: Ignition 2.19.0 Mar 7 01:26:54.726535 ignition[926]: Stage: disks Mar 7 01:26:54.730421 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:26:54.726689 ignition[926]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:54.735403 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:26:54.726697 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:54.742334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:26:54.727598 ignition[926]: disks: disks passed Mar 7 01:26:54.751746 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:26:54.727640 ignition[926]: Ignition finished successfully Mar 7 01:26:54.759829 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:26:54.768877 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:26:54.790235 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:26:54.864824 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 7 01:26:54.873945 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:26:54.888205 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 7 01:26:54.942018 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none. Mar 7 01:26:54.942420 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:26:54.946446 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:26:54.967106 systemd-networkd[904]: eth0: Gained IPv6LL Mar 7 01:26:54.994054 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:26:55.013004 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Mar 7 01:26:55.025002 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:55.025049 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:55.025060 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:26:55.029160 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:26:55.038133 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 01:26:55.046924 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:26:55.066565 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:26:55.046958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:26:55.069473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:26:55.075521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:26:55.086166 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:26:55.674042 coreos-metadata[960]: Mar 07 01:26:55.674 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:26:55.680762 coreos-metadata[960]: Mar 07 01:26:55.680 INFO Fetch successful Mar 7 01:26:55.680762 coreos-metadata[960]: Mar 07 01:26:55.680 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:26:55.693186 coreos-metadata[960]: Mar 07 01:26:55.691 INFO Fetch successful Mar 7 01:26:55.709077 coreos-metadata[960]: Mar 07 01:26:55.709 INFO wrote hostname ci-4081.3.6-n-24b0a814a4 to /sysroot/etc/hostname Mar 7 01:26:55.716795 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:26:55.916515 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:26:55.967125 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:26:55.995363 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:26:56.002385 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:26:57.111405 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:26:57.123155 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:26:57.128927 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:26:57.149452 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:57.145076 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 7 01:26:57.173034 ignition[1062]: INFO : Ignition 2.19.0 Mar 7 01:26:57.176816 ignition[1062]: INFO : Stage: mount Mar 7 01:26:57.176816 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:57.176816 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:57.191330 ignition[1062]: INFO : mount: mount passed Mar 7 01:26:57.191330 ignition[1062]: INFO : Ignition finished successfully Mar 7 01:26:57.186838 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:26:57.195569 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:26:57.215329 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:26:57.226937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:26:57.250209 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074) Mar 7 01:26:57.260722 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 01:26:57.260746 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 01:26:57.264312 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:26:57.270999 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:26:57.273232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:26:57.300945 ignition[1091]: INFO : Ignition 2.19.0
Mar 7 01:26:57.300945 ignition[1091]: INFO : Stage: files
Mar 7 01:26:57.307110 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:26:57.307110 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:26:57.307110 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:26:57.320971 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:26:57.320971 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:26:57.393403 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:26:57.399250 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:26:57.399250 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:26:57.399250 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 01:26:57.399250 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 7 01:26:57.394394 unknown[1091]: wrote ssh authorized keys file for user: core
Mar 7 01:26:57.443718 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:26:57.632633 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 01:26:57.632633 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:26:57.648751 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 7 01:26:58.054231 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:26:58.512786 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 01:26:58.512786 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:26:58.557376 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:26:58.566154 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:26:58.566154 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:26:58.566154 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:26:58.566154 ignition[1091]: INFO : files: files passed
Mar 7 01:26:58.566154 ignition[1091]: INFO : Ignition finished successfully
Mar 7 01:26:58.566840 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:26:58.597740 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:26:58.609158 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:26:58.636402 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:26:58.636502 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:26:58.668937 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:26:58.668937 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:26:58.683012 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:26:58.676783 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:26:58.688522 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:26:58.711228 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:26:58.736468 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:26:58.736592 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:26:58.746184 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:26:58.755386 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:26:58.763978 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:26:58.777259 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:26:58.796470 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:26:58.808326 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:26:58.823668 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:26:58.828976 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:26:58.838575 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:26:58.847226 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:26:58.847384 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:26:58.859474 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:26:58.868650 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:26:58.876372 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:26:58.884789 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:26:58.894340 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:26:58.903547 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:26:58.912448 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:26:58.922316 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:26:58.931730 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:26:58.940071 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:26:58.947337 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:26:58.947496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:26:58.959066 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:26:58.967971 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:26:58.977502 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:26:58.985906 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:26:58.991459 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:26:58.991614 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 7 01:26:59.005673 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:26:59.005821 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:26:59.015265 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:26:59.015413 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:26:59.023937 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 01:26:59.024095 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:26:59.052621 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:26:59.062263 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:26:59.069792 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:26:59.089660 ignition[1143]: INFO : Ignition 2.19.0 Mar 7 01:26:59.089660 ignition[1143]: INFO : Stage: umount Mar 7 01:26:59.089660 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:26:59.089660 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:26:59.089660 ignition[1143]: INFO : umount: umount passed Mar 7 01:26:59.089660 ignition[1143]: INFO : Ignition finished successfully Mar 7 01:26:59.070041 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:26:59.085120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:26:59.085279 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:26:59.100219 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:26:59.101030 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:26:59.103014 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:26:59.110439 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 7 01:26:59.112021 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:26:59.125467 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:26:59.125543 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:26:59.137533 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:26:59.137588 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:26:59.142035 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:26:59.142071 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:26:59.146243 systemd[1]: Stopped target network.target - Network. Mar 7 01:26:59.154895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:26:59.154967 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:26:59.164958 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:26:59.172730 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:26:59.176507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:26:59.182116 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:26:59.190081 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:26:59.198650 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:26:59.198697 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:26:59.207781 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:26:59.207813 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:26:59.217899 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:26:59.217945 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:26:59.225789 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Mar 7 01:26:59.225823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:26:59.234664 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:26:59.243504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:26:59.252023 systemd-networkd[904]: eth0: DHCPv6 lease lost Mar 7 01:26:59.256405 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:26:59.256584 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:26:59.266456 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:26:59.266548 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:26:59.276717 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:26:59.276776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:26:59.450765 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: Data path switched from VF: enP15401s1 Mar 7 01:26:59.303224 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:26:59.309961 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:26:59.310037 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:26:59.318687 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:26:59.318726 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:26:59.327008 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:26:59.327051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:26:59.335689 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:26:59.335730 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:26:59.344521 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 7 01:26:59.379630 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:26:59.379809 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:26:59.390205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:26:59.390248 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:26:59.401798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:26:59.401838 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:26:59.410614 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:26:59.410657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:26:59.423481 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:26:59.423520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:26:59.438498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:26:59.438574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:26:59.470183 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:26:59.481242 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:26:59.481312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:26:59.496432 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:26:59.496488 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:26:59.501872 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:26:59.501908 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:26:59.511487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 7 01:26:59.511522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:26:59.522124 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:26:59.524011 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:26:59.531288 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:26:59.531365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:26:59.544499 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:26:59.673911 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 7 01:26:59.544584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:26:59.554531 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:26:59.562685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:26:59.562759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:26:59.584200 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:26:59.598021 systemd[1]: Switching root. Mar 7 01:26:59.700942 systemd-journald[217]: Journal stopped Mar 7 01:27:04.845753 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:27:04.845775 kernel: SELinux: policy capability open_perms=1 Mar 7 01:27:04.845785 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:27:04.845793 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:27:04.845803 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:27:04.845810 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:27:04.845819 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:27:04.845827 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:27:04.845836 systemd[1]: Successfully loaded SELinux policy in 578.233ms. 
Mar 7 01:27:04.845845 kernel: audit: type=1403 audit(1772846821.498:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:27:04.845856 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.184ms. Mar 7 01:27:04.845866 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:27:04.845874 systemd[1]: Detected virtualization microsoft. Mar 7 01:27:04.845883 systemd[1]: Detected architecture arm64. Mar 7 01:27:04.845892 systemd[1]: Detected first boot. Mar 7 01:27:04.845904 systemd[1]: Hostname set to . Mar 7 01:27:04.845913 systemd[1]: Initializing machine ID from random generator. Mar 7 01:27:04.845922 zram_generator::config[1185]: No configuration found. Mar 7 01:27:04.845932 systemd[1]: Populated /etc with preset unit settings. Mar 7 01:27:04.845941 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 01:27:04.845950 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 01:27:04.845959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 01:27:04.845970 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:27:04.845979 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:27:04.845996 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:27:04.846006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:27:04.846016 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:27:04.846025 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Mar 7 01:27:04.846034 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 01:27:04.846045 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:27:04.846055 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:27:04.846064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:27:04.846073 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:27:04.846083 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:27:04.846092 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 01:27:04.846101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:27:04.846111 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 7 01:27:04.846123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:27:04.846132 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 01:27:04.846141 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 01:27:04.846152 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 01:27:04.846162 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:27:04.846171 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:27:04.846181 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:27:04.846190 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:27:04.846201 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:27:04.846210 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Mar 7 01:27:04.846220 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 01:27:04.846229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:27:04.846238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:27:04.846248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:27:04.846259 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:27:04.846268 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:27:04.846278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 01:27:04.846287 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:27:04.846297 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:27:04.846306 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:27:04.846317 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 01:27:04.846328 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:27:04.846338 systemd[1]: Reached target machines.target - Containers. Mar 7 01:27:04.846348 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:27:04.846357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:27:04.846367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:27:04.846377 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:27:04.846386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:27:04.846396 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Mar 7 01:27:04.846407 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:27:04.846416 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:27:04.846426 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:27:04.846436 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:27:04.846445 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 01:27:04.846455 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 01:27:04.846464 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 01:27:04.846474 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 01:27:04.846484 kernel: loop: module loaded Mar 7 01:27:04.846493 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:27:04.846503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:27:04.846512 kernel: ACPI: bus type drm_connector registered Mar 7 01:27:04.846522 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:27:04.846532 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:27:04.846541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:27:04.846564 systemd-journald[1283]: Collecting audit messages is disabled. Mar 7 01:27:04.846585 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 01:27:04.846595 systemd-journald[1283]: Journal started Mar 7 01:27:04.846615 systemd-journald[1283]: Runtime Journal (/run/log/journal/71606d0e6a124082a86b173c8db864fa) is 8.0M, max 78.5M, 70.5M free. Mar 7 01:27:03.951330 systemd[1]: Queued start job for default target multi-user.target. 
Mar 7 01:27:04.118476 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 7 01:27:04.118818 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 01:27:04.119151 systemd[1]: systemd-journald.service: Consumed 2.451s CPU time. Mar 7 01:27:04.854090 systemd[1]: Stopped verity-setup.service. Mar 7 01:27:04.870305 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:27:04.870388 kernel: fuse: init (API version 7.39) Mar 7 01:27:04.871201 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 01:27:04.875900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:27:04.880678 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 01:27:04.885210 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 01:27:04.889931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:27:04.894812 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:27:04.901062 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:27:04.906492 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:27:04.913043 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:27:04.913252 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:27:04.918837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:27:04.919172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:27:04.924474 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:27:04.924669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:27:04.929444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:27:04.929643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 7 01:27:04.935246 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:27:04.935439 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:27:04.940341 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:27:04.940540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:27:04.945550 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:27:04.950698 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:27:04.956524 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:27:04.962346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:27:04.975185 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:27:04.990074 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 01:27:04.996258 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:27:05.001334 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:27:05.001437 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:27:05.007220 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:27:05.024125 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:27:05.030678 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:27:05.035200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:27:05.126501 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Mar 7 01:27:05.132438 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:27:05.137641 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:27:05.138617 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:27:05.143469 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:27:05.144668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:27:05.153290 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:27:05.162183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:27:05.179200 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:27:05.198481 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:27:05.206148 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:27:05.211766 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:27:05.218537 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:27:05.229073 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:27:05.237007 systemd-journald[1283]: Time spent on flushing to /var/log/journal/71606d0e6a124082a86b173c8db864fa is 70.011ms for 898 entries. Mar 7 01:27:05.237007 systemd-journald[1283]: System Journal (/var/log/journal/71606d0e6a124082a86b173c8db864fa) is 11.8M, max 2.6G, 2.6G free. Mar 7 01:27:05.360137 systemd-journald[1283]: Received client request to flush runtime journal. 
Mar 7 01:27:05.360184 kernel: loop0: detected capacity change from 0 to 31320 Mar 7 01:27:05.360199 systemd-journald[1283]: /var/log/journal/71606d0e6a124082a86b173c8db864fa/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Mar 7 01:27:05.360232 systemd-journald[1283]: Rotating system journal. Mar 7 01:27:05.250287 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:27:05.261361 udevadm[1323]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 7 01:27:05.284510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:27:05.329066 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:27:05.329739 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:27:05.335678 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Mar 7 01:27:05.335688 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Mar 7 01:27:05.341826 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:27:05.354249 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:27:05.365077 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:27:05.614841 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:27:05.627139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:27:05.645761 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Mar 7 01:27:05.645780 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Mar 7 01:27:05.649555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 7 01:27:05.684006 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:27:05.760022 kernel: loop1: detected capacity change from 0 to 114432 Mar 7 01:27:05.862837 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:27:05.873114 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:27:05.898401 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Mar 7 01:27:06.014312 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:27:06.027618 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:27:06.071451 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 7 01:27:06.088880 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:27:06.138008 kernel: loop2: detected capacity change from 0 to 209336 Mar 7 01:27:06.156012 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:27:06.168580 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 7 01:27:06.181012 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:27:06.220034 kernel: hv_vmbus: registering driver hv_balloon Mar 7 01:27:06.223829 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 7 01:27:06.231906 kernel: hv_vmbus: registering driver hyperv_fb Mar 7 01:27:06.231962 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 7 01:27:06.231975 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 7 01:27:06.248844 kernel: loop3: detected capacity change from 0 to 114328 Mar 7 01:27:06.248913 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 7 01:27:06.254540 kernel: Console: switching to colour dummy device 80x25 Mar 7 01:27:06.251468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:27:06.267787 kernel: Console: switching to colour frame buffer device 128x48 Mar 7 01:27:06.282422 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:27:06.282590 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:27:06.285732 systemd-networkd[1356]: lo: Link UP Mar 7 01:27:06.285736 systemd-networkd[1356]: lo: Gained carrier Mar 7 01:27:06.287481 systemd-networkd[1356]: Enumeration completed Mar 7 01:27:06.287791 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:27:06.287794 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:27:06.289102 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:27:06.305335 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:27:06.311606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 01:27:06.334013 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1351) Mar 7 01:27:06.355014 kernel: mlx5_core 3c29:00:02.0 enP15401s1: Link up Mar 7 01:27:06.381619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 01:27:06.389673 kernel: hv_netvsc 7ced8dc7-49c5-7ced-8dc7-49c57ced8dc7 eth0: Data path switched to VF: enP15401s1 Mar 7 01:27:06.389079 systemd-networkd[1356]: enP15401s1: Link UP Mar 7 01:27:06.389163 systemd-networkd[1356]: eth0: Link UP Mar 7 01:27:06.389166 systemd-networkd[1356]: eth0: Gained carrier Mar 7 01:27:06.389180 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:27:06.391137 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:27:06.396247 systemd-networkd[1356]: enP15401s1: Gained carrier Mar 7 01:27:06.402055 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 01:27:06.449432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:27:06.592949 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:27:06.602233 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:27:06.624012 kernel: loop4: detected capacity change from 0 to 31320 Mar 7 01:27:06.636023 kernel: loop5: detected capacity change from 0 to 114432 Mar 7 01:27:06.649015 kernel: loop6: detected capacity change from 0 to 209336 Mar 7 01:27:06.666046 kernel: loop7: detected capacity change from 0 to 114328 Mar 7 01:27:06.675124 (sd-merge)[1445]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. 
Mar 7 01:27:06.675516 (sd-merge)[1445]: Merged extensions into '/usr'. Mar 7 01:27:06.686103 systemd[1]: Reloading requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:27:06.686117 systemd[1]: Reloading... Mar 7 01:27:06.687105 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:27:06.752064 zram_generator::config[1487]: No configuration found. Mar 7 01:27:06.858755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:27:06.929363 systemd[1]: Reloading finished in 242 ms. Mar 7 01:27:06.955024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:27:06.960867 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:27:06.967178 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:27:06.975839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:27:06.987113 systemd[1]: Starting ensure-sysext.service... Mar 7 01:27:06.992140 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:27:06.999214 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:27:07.004044 lvm[1535]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:27:07.011037 systemd[1]: Reloading requested from client PID 1534 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:27:07.011051 systemd[1]: Reloading... Mar 7 01:27:07.029270 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:27:07.029797 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Mar 7 01:27:07.030534 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:27:07.030845 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Mar 7 01:27:07.030946 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Mar 7 01:27:07.033709 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:27:07.033718 systemd-tmpfiles[1536]: Skipping /boot
Mar 7 01:27:07.042651 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:27:07.042667 systemd-tmpfiles[1536]: Skipping /boot
Mar 7 01:27:07.072073 zram_generator::config[1563]: No configuration found.
Mar 7 01:27:07.192705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:27:07.266928 systemd[1]: Reloading finished in 255 ms.
Mar 7 01:27:07.282935 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:27:07.293430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:27:07.311278 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:27:07.317500 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:27:07.325346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:27:07.338100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:27:07.345446 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:27:07.358692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:27:07.361261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:27:07.373262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:27:07.382245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:27:07.390588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:27:07.391618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:27:07.396256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:27:07.404194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:27:07.404346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:27:07.412566 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:27:07.412911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:27:07.418662 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:27:07.429461 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:27:07.438394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:27:07.439128 augenrules[1648]: No rules
Mar 7 01:27:07.444400 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:27:07.450712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:27:07.458262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:27:07.462687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:27:07.463657 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:27:07.464343 systemd-resolved[1629]: Positive Trust Anchors:
Mar 7 01:27:07.464549 systemd-resolved[1629]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:27:07.464586 systemd-resolved[1629]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:27:07.469069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:27:07.469205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:27:07.474539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:27:07.474660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:27:07.480594 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:27:07.480706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:27:07.490723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:27:07.504294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:27:07.510586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:27:07.511828 systemd-resolved[1629]: Using system hostname 'ci-4081.3.6-n-24b0a814a4'.
Mar 7 01:27:07.517257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:27:07.525963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:27:07.530937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:27:07.531133 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:27:07.536218 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:27:07.541348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:27:07.541502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:27:07.547163 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:27:07.547290 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:27:07.552445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:27:07.552584 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:27:07.558826 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:27:07.558951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:27:07.566419 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:27:07.571678 systemd[1]: Reached target network.target - Network.
Mar 7 01:27:07.575937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:27:07.581211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:27:07.581279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:27:07.830159 systemd-networkd[1356]: eth0: Gained IPv6LL
Mar 7 01:27:07.832839 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:27:07.839542 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:27:08.251905 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:27:08.257558 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:27:10.325939 ldconfig[1315]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:27:10.349055 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:27:10.363183 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:27:10.370698 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:27:10.376104 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:27:10.380501 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:27:10.385736 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:27:10.391319 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:27:10.395663 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:27:10.400883 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:27:10.406318 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:27:10.406346 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:27:10.410109 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:27:10.414793 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:27:10.420952 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:27:10.427914 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:27:10.432846 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:27:10.437448 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:27:10.441407 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:27:10.445436 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:27:10.445460 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:27:10.447824 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 7 01:27:10.454136 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:27:10.467125 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:27:10.480198 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:27:10.485537 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 7 01:27:10.486125 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:27:10.492159 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:27:10.494105 jq[1688]: false
Mar 7 01:27:10.500294 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:27:10.500334 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 7 01:27:10.502261 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 7 01:27:10.510377 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 7 01:27:10.512547 KVP[1690]: KVP starting; pid is:1690
Mar 7 01:27:10.518932 chronyd[1694]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 7 01:27:10.520140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:27:10.526318 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:27:10.534191 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:27:10.542132 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:27:10.548838 extend-filesystems[1689]: Found loop4
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found loop5
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found loop6
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found loop7
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda1
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda2
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda3
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found usr
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda4
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda6
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda7
Mar 7 01:27:10.553280 extend-filesystems[1689]: Found sda9
Mar 7 01:27:10.553280 extend-filesystems[1689]: Checking size of /dev/sda9
Mar 7 01:27:10.650734 kernel: hv_utils: KVP IC version 4.0
Mar 7 01:27:10.560253 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:27:10.558544 KVP[1690]: KVP LIC Version: 3.1
Mar 7 01:27:10.650939 extend-filesystems[1689]: Old size kept for /dev/sda9
Mar 7 01:27:10.650939 extend-filesystems[1689]: Found sr0
Mar 7 01:27:10.585154 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:27:10.573884 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring
Mar 7 01:27:10.607038 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:27:10.574181 chronyd[1694]: Loaded seccomp filter (level 2)
Mar 7 01:27:10.619961 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:27:10.606704 dbus-daemon[1687]: [system] SELinux support is enabled
Mar 7 01:27:10.624370 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:27:10.625287 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:27:10.650222 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:27:10.692216 jq[1720]: true
Mar 7 01:27:10.662840 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:27:10.686842 systemd[1]: Started chronyd.service - NTP client/server.
Mar 7 01:27:10.706407 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:27:10.706584 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:27:10.706845 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:27:10.706976 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:27:10.724537 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:27:10.724726 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:27:10.732641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:27:10.745457 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:27:10.745661 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:27:10.769005 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1732)
Mar 7 01:27:10.776722 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:27:10.776765 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:27:10.787372 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:27:10.787393 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:27:10.787413 (ntainerd)[1739]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:27:10.793607 jq[1738]: true
Mar 7 01:27:10.845093 tar[1735]: linux-arm64/LICENSE
Mar 7 01:27:10.845093 tar[1735]: linux-arm64/helm
Mar 7 01:27:10.879055 coreos-metadata[1684]: Mar 07 01:27:10.879 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 01:27:10.882398 coreos-metadata[1684]: Mar 07 01:27:10.882 INFO Fetch successful
Mar 7 01:27:10.882398 coreos-metadata[1684]: Mar 07 01:27:10.882 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 7 01:27:10.888158 coreos-metadata[1684]: Mar 07 01:27:10.886 INFO Fetch successful
Mar 7 01:27:10.888158 coreos-metadata[1684]: Mar 07 01:27:10.887 INFO Fetching http://168.63.129.16/machine/ee2f522d-0afe-44d2-a625-137a8cc9468c/7e07db41%2D6444%2D4faa%2Dac84%2Db13b68320b61.%5Fci%2D4081.3.6%2Dn%2D24b0a814a4?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 7 01:27:10.891118 coreos-metadata[1684]: Mar 07 01:27:10.891 INFO Fetch successful
Mar 7 01:27:10.891362 coreos-metadata[1684]: Mar 07 01:27:10.891 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 7 01:27:10.894432 systemd-logind[1711]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 7 01:27:10.898323 systemd-logind[1711]: New seat seat0.
Mar 7 01:27:10.898978 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:27:10.905524 coreos-metadata[1684]: Mar 07 01:27:10.905 INFO Fetch successful
Mar 7 01:27:10.942060 bash[1784]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:27:10.948667 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:27:10.957368 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:27:10.962994 update_engine[1717]: I20260307 01:27:10.962891 1717 main.cc:92] Flatcar Update Engine starting
Mar 7 01:27:10.976522 update_engine[1717]: I20260307 01:27:10.971601 1717 update_check_scheduler.cc:74] Next update check in 10m21s
Mar 7 01:27:10.971705 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:27:10.987033 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:27:10.998311 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:27:11.008455 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:27:11.224090 locksmithd[1801]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:27:11.507127 tar[1735]: linux-arm64/README.md
Mar 7 01:27:11.526485 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:27:11.591510 containerd[1739]: time="2026-03-07T01:27:11.591407400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:27:11.637692 containerd[1739]: time="2026-03-07T01:27:11.637512120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639063880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639103880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639120440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639296400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639314360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639375760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639387840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639568240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639584320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639597680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640300 containerd[1739]: time="2026-03-07T01:27:11.639607440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640546 containerd[1739]: time="2026-03-07T01:27:11.639680160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640546 containerd[1739]: time="2026-03-07T01:27:11.639864080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640546 containerd[1739]: time="2026-03-07T01:27:11.639976080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:27:11.640546 containerd[1739]: time="2026-03-07T01:27:11.640003480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:27:11.640546 containerd[1739]: time="2026-03-07T01:27:11.640087840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:27:11.640546 containerd[1739]: time="2026-03-07T01:27:11.640128760Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:27:11.662492 containerd[1739]: time="2026-03-07T01:27:11.662421520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:27:11.663007 containerd[1739]: time="2026-03-07T01:27:11.662633760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:27:11.663007 containerd[1739]: time="2026-03-07T01:27:11.662670000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:27:11.663007 containerd[1739]: time="2026-03-07T01:27:11.662749240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:27:11.663007 containerd[1739]: time="2026-03-07T01:27:11.662767680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:27:11.663007 containerd[1739]: time="2026-03-07T01:27:11.662934600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:27:11.663373 containerd[1739]: time="2026-03-07T01:27:11.663355480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:27:11.663527 containerd[1739]: time="2026-03-07T01:27:11.663511640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:27:11.663611 containerd[1739]: time="2026-03-07T01:27:11.663598280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:27:11.663663 containerd[1739]: time="2026-03-07T01:27:11.663652440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:27:11.663715 containerd[1739]: time="2026-03-07T01:27:11.663703960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.663773 containerd[1739]: time="2026-03-07T01:27:11.663761080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663812160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663830080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663844880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663857840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663869360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663880640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663904640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663917280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663928840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663941240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663952640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663964640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663975920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664196 containerd[1739]: time="2026-03-07T01:27:11.663999440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664013200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664026880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664038640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664051320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664063360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664104400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664126120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664137920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.664488 containerd[1739]: time="2026-03-07T01:27:11.664148680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:27:11.664700 containerd[1739]: time="2026-03-07T01:27:11.664650360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:27:11.664700 containerd[1739]: time="2026-03-07T01:27:11.664678320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:27:11.665002 containerd[1739]: time="2026-03-07T01:27:11.664808600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:27:11.665002 containerd[1739]: time="2026-03-07T01:27:11.664828240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:27:11.665002 containerd[1739]: time="2026-03-07T01:27:11.664838480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.665002 containerd[1739]: time="2026-03-07T01:27:11.664851040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:27:11.665002 containerd[1739]: time="2026-03-07T01:27:11.664861720Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:27:11.665002 containerd[1739]: time="2026-03-07T01:27:11.664875840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:27:11.665352 containerd[1739]: time="2026-03-07T01:27:11.665297560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:27:11.665949 containerd[1739]: time="2026-03-07T01:27:11.665486440Z" level=info msg="Connect containerd service"
Mar 7 01:27:11.665949 containerd[1739]: time="2026-03-07T01:27:11.665530440Z" level=info msg="using legacy CRI server"
Mar 7 01:27:11.665949 containerd[1739]: time="2026-03-07T01:27:11.665537960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:27:11.665949 containerd[1739]: time="2026-03-07T01:27:11.665624360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:27:11.666476 containerd[1739]: time="2026-03-07T01:27:11.666453720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to
load cni config" Mar 7 01:27:11.666662 containerd[1739]: time="2026-03-07T01:27:11.666632480Z" level=info msg="Start subscribing containerd event" Mar 7 01:27:11.666735 containerd[1739]: time="2026-03-07T01:27:11.666724000Z" level=info msg="Start recovering state" Mar 7 01:27:11.666830 containerd[1739]: time="2026-03-07T01:27:11.666818280Z" level=info msg="Start event monitor" Mar 7 01:27:11.666878 containerd[1739]: time="2026-03-07T01:27:11.666867080Z" level=info msg="Start snapshots syncer" Mar 7 01:27:11.666935 containerd[1739]: time="2026-03-07T01:27:11.666923760Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:27:11.666982 containerd[1739]: time="2026-03-07T01:27:11.666972000Z" level=info msg="Start streaming server" Mar 7 01:27:11.667404 containerd[1739]: time="2026-03-07T01:27:11.667386480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:27:11.667529 containerd[1739]: time="2026-03-07T01:27:11.667506280Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:27:11.668581 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:27:11.676225 containerd[1739]: time="2026-03-07T01:27:11.674344560Z" level=info msg="containerd successfully booted in 0.087409s" Mar 7 01:27:11.760143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:27:11.767314 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:27:11.862004 sshd_keygen[1714]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:27:11.880246 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:27:11.893406 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:27:11.908173 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 7 01:27:11.913020 systemd[1]: issuegen.service: Deactivated successfully. 
Mar 7 01:27:11.914045 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:27:11.934160 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:27:11.951736 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 7 01:27:11.957028 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:27:11.966184 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:27:11.978363 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 7 01:27:11.983769 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:27:11.989195 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:27:11.993945 systemd[1]: Startup finished in 595ms (kernel) + 12.623s (initrd) + 11.072s (userspace) = 24.290s. Mar 7 01:27:12.263973 kubelet[1822]: E0307 01:27:12.263924 1822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:27:12.269281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:27:12.269514 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:27:12.300021 login[1850]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 7 01:27:12.300532 login[1849]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:12.310501 systemd-logind[1711]: New session 2 of user core. Mar 7 01:27:12.312459 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:27:12.320279 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 7 01:27:12.347805 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:27:12.352269 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:27:12.373555 (systemd)[1859]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:27:12.541184 systemd[1859]: Queued start job for default target default.target. Mar 7 01:27:12.547042 systemd[1859]: Created slice app.slice - User Application Slice. Mar 7 01:27:12.547071 systemd[1859]: Reached target paths.target - Paths. Mar 7 01:27:12.547083 systemd[1859]: Reached target timers.target - Timers. Mar 7 01:27:12.548234 systemd[1859]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:27:12.557727 systemd[1859]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:27:12.557786 systemd[1859]: Reached target sockets.target - Sockets. Mar 7 01:27:12.557798 systemd[1859]: Reached target basic.target - Basic System. Mar 7 01:27:12.557834 systemd[1859]: Reached target default.target - Main User Target. Mar 7 01:27:12.557859 systemd[1859]: Startup finished in 176ms. Mar 7 01:27:12.557925 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:27:12.561314 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:27:13.301479 login[1850]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:13.305513 systemd-logind[1711]: New session 1 of user core. Mar 7 01:27:13.315108 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 7 01:27:13.503110 waagent[1847]: 2026-03-07T01:27:13.498904Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 7 01:27:13.503592 waagent[1847]: 2026-03-07T01:27:13.503536Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 7 01:27:13.507045 waagent[1847]: 2026-03-07T01:27:13.506992Z INFO Daemon Daemon Python: 3.11.9 Mar 7 01:27:13.510438 waagent[1847]: 2026-03-07T01:27:13.510384Z INFO Daemon Daemon Run daemon Mar 7 01:27:13.513653 waagent[1847]: 2026-03-07T01:27:13.513609Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 7 01:27:13.520269 waagent[1847]: 2026-03-07T01:27:13.520210Z INFO Daemon Daemon Using waagent for provisioning Mar 7 01:27:13.524355 waagent[1847]: 2026-03-07T01:27:13.524313Z INFO Daemon Daemon Activate resource disk Mar 7 01:27:13.528059 waagent[1847]: 2026-03-07T01:27:13.528012Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 7 01:27:13.537519 waagent[1847]: 2026-03-07T01:27:13.537458Z INFO Daemon Daemon Found device: None Mar 7 01:27:13.541106 waagent[1847]: 2026-03-07T01:27:13.541057Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 7 01:27:13.547736 waagent[1847]: 2026-03-07T01:27:13.547682Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 7 01:27:13.558265 waagent[1847]: 2026-03-07T01:27:13.558172Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 01:27:13.562959 waagent[1847]: 2026-03-07T01:27:13.562905Z INFO Daemon Daemon Running default provisioning handler Mar 7 01:27:13.573472 waagent[1847]: 2026-03-07T01:27:13.573405Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 7 01:27:13.584843 waagent[1847]: 2026-03-07T01:27:13.584783Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 7 01:27:13.592740 waagent[1847]: 2026-03-07T01:27:13.592674Z INFO Daemon Daemon cloud-init is enabled: False Mar 7 01:27:13.596788 waagent[1847]: 2026-03-07T01:27:13.596729Z INFO Daemon Daemon Copying ovf-env.xml Mar 7 01:27:13.711217 waagent[1847]: 2026-03-07T01:27:13.711134Z INFO Daemon Daemon Successfully mounted dvd Mar 7 01:27:13.738968 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 7 01:27:13.741041 waagent[1847]: 2026-03-07T01:27:13.740774Z INFO Daemon Daemon Detect protocol endpoint Mar 7 01:27:13.744591 waagent[1847]: 2026-03-07T01:27:13.744540Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 01:27:13.748749 waagent[1847]: 2026-03-07T01:27:13.748715Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 7 01:27:13.753979 waagent[1847]: 2026-03-07T01:27:13.753944Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 7 01:27:13.758043 waagent[1847]: 2026-03-07T01:27:13.758007Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 7 01:27:13.761855 waagent[1847]: 2026-03-07T01:27:13.761822Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 7 01:27:13.792337 waagent[1847]: 2026-03-07T01:27:13.792296Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 7 01:27:13.797848 waagent[1847]: 2026-03-07T01:27:13.797823Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 7 01:27:13.802048 waagent[1847]: 2026-03-07T01:27:13.802013Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 7 01:27:14.005401 waagent[1847]: 2026-03-07T01:27:14.005256Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 7 01:27:14.010404 waagent[1847]: 2026-03-07T01:27:14.010354Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 7 01:27:14.017685 waagent[1847]: 2026-03-07T01:27:14.017640Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 01:27:14.035529 waagent[1847]: 2026-03-07T01:27:14.035491Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 7 01:27:14.040313 waagent[1847]: 2026-03-07T01:27:14.040268Z INFO Daemon Mar 7 01:27:14.042650 waagent[1847]: 2026-03-07T01:27:14.042615Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d81bf8c8-c67a-483c-8143-b5727b3730b7 eTag: 5424615754872682528 source: Fabric] Mar 7 01:27:14.051757 waagent[1847]: 2026-03-07T01:27:14.051717Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 7 01:27:14.057268 waagent[1847]: 2026-03-07T01:27:14.057229Z INFO Daemon Mar 7 01:27:14.059344 waagent[1847]: 2026-03-07T01:27:14.059305Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 7 01:27:14.068168 waagent[1847]: 2026-03-07T01:27:14.068136Z INFO Daemon Daemon Downloading artifacts profile blob Mar 7 01:27:14.217267 waagent[1847]: 2026-03-07T01:27:14.217189Z INFO Daemon Downloaded certificate {'thumbprint': 'CFADC0943B4218B46AEF06C05135800ACAF8B65C', 'hasPrivateKey': True} Mar 7 01:27:14.225480 waagent[1847]: 2026-03-07T01:27:14.225434Z INFO Daemon Fetch goal state completed Mar 7 01:27:14.264745 waagent[1847]: 2026-03-07T01:27:14.264661Z INFO Daemon Daemon Starting provisioning Mar 7 01:27:14.268721 waagent[1847]: 2026-03-07T01:27:14.268668Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 7 01:27:14.272511 waagent[1847]: 2026-03-07T01:27:14.272473Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-24b0a814a4] Mar 7 01:27:14.294018 waagent[1847]: 2026-03-07T01:27:14.293420Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-24b0a814a4] Mar 7 01:27:14.298621 waagent[1847]: 2026-03-07T01:27:14.298567Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 7 01:27:14.303812 waagent[1847]: 2026-03-07T01:27:14.303767Z INFO Daemon Daemon Primary interface is [eth0] Mar 7 01:27:14.330729 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:27:14.330741 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:27:14.330784 systemd-networkd[1356]: eth0: DHCP lease lost Mar 7 01:27:14.331774 waagent[1847]: 2026-03-07T01:27:14.331698Z INFO Daemon Daemon Create user account if not exists Mar 7 01:27:14.336081 waagent[1847]: 2026-03-07T01:27:14.336013Z INFO Daemon Daemon User core already exists, skip useradd Mar 7 01:27:14.340471 waagent[1847]: 2026-03-07T01:27:14.340425Z INFO Daemon Daemon Configure sudoer Mar 7 01:27:14.342034 systemd-networkd[1356]: eth0: DHCPv6 lease lost Mar 7 01:27:14.344086 waagent[1847]: 2026-03-07T01:27:14.344032Z INFO Daemon Daemon Configure sshd Mar 7 01:27:14.347407 waagent[1847]: 2026-03-07T01:27:14.347355Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 7 01:27:14.357465 waagent[1847]: 2026-03-07T01:27:14.357412Z INFO Daemon Daemon Deploy ssh public key. 
Mar 7 01:27:14.369038 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 7 01:27:15.475494 waagent[1847]: 2026-03-07T01:27:15.475445Z INFO Daemon Daemon Provisioning complete Mar 7 01:27:15.492043 waagent[1847]: 2026-03-07T01:27:15.491979Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 7 01:27:15.496878 waagent[1847]: 2026-03-07T01:27:15.496832Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 7 01:27:15.504555 waagent[1847]: 2026-03-07T01:27:15.504514Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 7 01:27:15.631511 waagent[1908]: 2026-03-07T01:27:15.630859Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 7 01:27:15.631511 waagent[1908]: 2026-03-07T01:27:15.631019Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 7 01:27:15.631511 waagent[1908]: 2026-03-07T01:27:15.631081Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 7 01:27:15.666951 waagent[1908]: 2026-03-07T01:27:15.666874Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 7 01:27:15.667295 waagent[1908]: 2026-03-07T01:27:15.667256Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:27:15.667437 waagent[1908]: 2026-03-07T01:27:15.667404Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:27:15.675310 waagent[1908]: 2026-03-07T01:27:15.675248Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 01:27:15.680930 waagent[1908]: 2026-03-07T01:27:15.680882Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 7 01:27:15.681552 waagent[1908]: 2026-03-07T01:27:15.681510Z INFO ExtHandler Mar 7 01:27:15.681707 waagent[1908]: 2026-03-07T01:27:15.681674Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 31dab008-847b-4122-b96b-47e1eb089e5c eTag: 5424615754872682528 source: Fabric] Mar 7 01:27:15.682142 waagent[1908]: 2026-03-07T01:27:15.682102Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 7 01:27:15.682802 waagent[1908]: 2026-03-07T01:27:15.682758Z INFO ExtHandler Mar 7 01:27:15.683543 waagent[1908]: 2026-03-07T01:27:15.682906Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 7 01:27:15.688005 waagent[1908]: 2026-03-07T01:27:15.686723Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 7 01:27:15.754118 waagent[1908]: 2026-03-07T01:27:15.753984Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFADC0943B4218B46AEF06C05135800ACAF8B65C', 'hasPrivateKey': True} Mar 7 01:27:15.754750 waagent[1908]: 2026-03-07T01:27:15.754709Z INFO ExtHandler Fetch goal state completed Mar 7 01:27:15.769865 waagent[1908]: 2026-03-07T01:27:15.769802Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1908 Mar 7 01:27:15.770146 waagent[1908]: 2026-03-07T01:27:15.770111Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 7 01:27:15.771800 waagent[1908]: 2026-03-07T01:27:15.771758Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 7 01:27:15.772257 waagent[1908]: 2026-03-07T01:27:15.772219Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 7 01:27:15.810107 waagent[1908]: 2026-03-07T01:27:15.810067Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 7 01:27:15.810422 waagent[1908]: 2026-03-07T01:27:15.810384Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 7 01:27:15.816287 waagent[1908]: 
2026-03-07T01:27:15.816255Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 7 01:27:15.822524 systemd[1]: Reloading requested from client PID 1921 ('systemctl') (unit waagent.service)... Mar 7 01:27:15.822742 systemd[1]: Reloading... Mar 7 01:27:15.910031 zram_generator::config[1958]: No configuration found. Mar 7 01:27:16.013393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:27:16.092865 systemd[1]: Reloading finished in 269 ms. Mar 7 01:27:16.115578 waagent[1908]: 2026-03-07T01:27:16.115225Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 7 01:27:16.122632 systemd[1]: Reloading requested from client PID 2009 ('systemctl') (unit waagent.service)... Mar 7 01:27:16.122646 systemd[1]: Reloading... Mar 7 01:27:16.195017 zram_generator::config[2043]: No configuration found. Mar 7 01:27:16.297997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:27:16.372774 systemd[1]: Reloading finished in 249 ms. Mar 7 01:27:16.395078 waagent[1908]: 2026-03-07T01:27:16.394453Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 7 01:27:16.395078 waagent[1908]: 2026-03-07T01:27:16.394615Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 7 01:27:17.547186 waagent[1908]: 2026-03-07T01:27:17.545972Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 7 01:27:17.547186 waagent[1908]: 2026-03-07T01:27:17.546579Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 7 01:27:17.547501 waagent[1908]: 2026-03-07T01:27:17.547392Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:27:17.547501 waagent[1908]: 2026-03-07T01:27:17.547480Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:27:17.547622 waagent[1908]: 2026-03-07T01:27:17.547566Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 7 01:27:17.547803 waagent[1908]: 2026-03-07T01:27:17.547759Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 7 01:27:17.548263 waagent[1908]: 2026-03-07T01:27:17.548208Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 7 01:27:17.548391 waagent[1908]: 2026-03-07T01:27:17.548345Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 7 01:27:17.548391 waagent[1908]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 7 01:27:17.548391 waagent[1908]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 7 01:27:17.548391 waagent[1908]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 7 01:27:17.548391 waagent[1908]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:27:17.548391 waagent[1908]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:27:17.548391 waagent[1908]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:27:17.548531 waagent[1908]: 2026-03-07T01:27:17.548460Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:27:17.548550 waagent[1908]: 2026-03-07T01:27:17.548530Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:27:17.548886 waagent[1908]: 2026-03-07T01:27:17.548708Z INFO EnvHandler ExtHandler Configure routes 
Mar 7 01:27:17.549422 waagent[1908]: 2026-03-07T01:27:17.549250Z INFO EnvHandler ExtHandler Gateway:None Mar 7 01:27:17.549422 waagent[1908]: 2026-03-07T01:27:17.549334Z INFO EnvHandler ExtHandler Routes:None Mar 7 01:27:17.549754 waagent[1908]: 2026-03-07T01:27:17.549715Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 7 01:27:17.549855 waagent[1908]: 2026-03-07T01:27:17.549807Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 7 01:27:17.550271 waagent[1908]: 2026-03-07T01:27:17.550214Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 7 01:27:17.550380 waagent[1908]: 2026-03-07T01:27:17.550337Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 7 01:27:17.550513 waagent[1908]: 2026-03-07T01:27:17.550476Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 7 01:27:17.556353 waagent[1908]: 2026-03-07T01:27:17.556306Z INFO ExtHandler ExtHandler Mar 7 01:27:17.556699 waagent[1908]: 2026-03-07T01:27:17.556657Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 20c02d6a-334e-4b64-8773-3c09478c77b7 correlation 9ad8cf5d-6301-40f6-9739-cd525918ff5f created: 2026-03-07T01:26:16.021965Z] Mar 7 01:27:17.558557 waagent[1908]: 2026-03-07T01:27:17.557335Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 7 01:27:17.558557 waagent[1908]: 2026-03-07T01:27:17.557860Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 7 01:27:17.590855 waagent[1908]: 2026-03-07T01:27:17.590737Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F1D677EF-1B04-4AB8-88F6-33C18AFB207D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 7 01:27:17.603705 waagent[1908]: 2026-03-07T01:27:17.603642Z INFO MonitorHandler ExtHandler Network interfaces: Mar 7 01:27:17.603705 waagent[1908]: Executing ['ip', '-a', '-o', 'link']: Mar 7 01:27:17.603705 waagent[1908]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 7 01:27:17.603705 waagent[1908]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c7:49:c5 brd ff:ff:ff:ff:ff:ff Mar 7 01:27:17.603705 waagent[1908]: 3: enP15401s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c7:49:c5 brd ff:ff:ff:ff:ff:ff\ altname enP15401p0s2 Mar 7 01:27:17.603705 waagent[1908]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 7 01:27:17.603705 waagent[1908]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 7 01:27:17.603705 waagent[1908]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 7 01:27:17.603705 waagent[1908]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 7 01:27:17.603705 waagent[1908]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 7 01:27:17.603705 waagent[1908]: 2: eth0 inet6 fe80::7eed:8dff:fec7:49c5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 7 01:27:17.681047 waagent[1908]: 2026-03-07T01:27:17.680961Z INFO EnvHandler ExtHandler Successfully added 
Azure fabric firewall rules. Current Firewall rules: Mar 7 01:27:17.681047 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:27:17.681047 waagent[1908]: pkts bytes target prot opt in out source destination Mar 7 01:27:17.681047 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:27:17.681047 waagent[1908]: pkts bytes target prot opt in out source destination Mar 7 01:27:17.681047 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:27:17.681047 waagent[1908]: pkts bytes target prot opt in out source destination Mar 7 01:27:17.681047 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:27:17.681047 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:27:17.681047 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:27:17.683873 waagent[1908]: 2026-03-07T01:27:17.683812Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 7 01:27:17.683873 waagent[1908]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:27:17.683873 waagent[1908]: pkts bytes target prot opt in out source destination Mar 7 01:27:17.683873 waagent[1908]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:27:17.683873 waagent[1908]: pkts bytes target prot opt in out source destination Mar 7 01:27:17.683873 waagent[1908]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:27:17.683873 waagent[1908]: pkts bytes target prot opt in out source destination Mar 7 01:27:17.683873 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:27:17.683873 waagent[1908]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:27:17.683873 waagent[1908]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:27:17.684147 waagent[1908]: 2026-03-07T01:27:17.684113Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 7 01:27:22.520246 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Mar 7 01:27:22.528171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:27:22.630453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:27:22.635923 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:27:22.755025 kubelet[2136]: E0307 01:27:22.754969 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:27:22.758525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:27:22.758662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:27:32.085339 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:27:32.087120 systemd[1]: Started sshd@0-10.200.20.41:22-10.200.16.10:59556.service - OpenSSH per-connection server daemon (10.200.16.10:59556). Mar 7 01:27:32.616984 sshd[2143]: Accepted publickey for core from 10.200.16.10 port 59556 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:32.618049 sshd[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:32.621508 systemd-logind[1711]: New session 3 of user core. Mar 7 01:27:32.629117 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:27:33.009080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:27:33.014273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:27:33.047476 systemd[1]: Started sshd@1-10.200.20.41:22-10.200.16.10:59570.service - OpenSSH per-connection server daemon (10.200.16.10:59570). 
Mar 7 01:27:33.300971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:27:33.305464 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:27:33.336316 kubelet[2158]: E0307 01:27:33.336257 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:27:33.339306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:27:33.339448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:27:33.532479 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 59570 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:33.533775 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:33.538035 systemd-logind[1711]: New session 4 of user core. Mar 7 01:27:33.548152 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:27:33.883958 sshd[2151]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:33.887932 systemd-logind[1711]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:27:33.888156 systemd[1]: sshd@1-10.200.20.41:22-10.200.16.10:59570.service: Deactivated successfully. Mar 7 01:27:33.889881 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:27:33.891739 systemd-logind[1711]: Removed session 4. Mar 7 01:27:33.970545 systemd[1]: Started sshd@2-10.200.20.41:22-10.200.16.10:59582.service - OpenSSH per-connection server daemon (10.200.16.10:59582). 
Mar 7 01:27:34.359672 chronyd[1694]: Selected source PHC0 Mar 7 01:27:34.456241 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 59582 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:34.457498 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:34.460870 systemd-logind[1711]: New session 5 of user core. Mar 7 01:27:34.471121 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:27:34.805183 sshd[2169]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:34.809252 systemd[1]: sshd@2-10.200.20.41:22-10.200.16.10:59582.service: Deactivated successfully. Mar 7 01:27:34.810716 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:27:34.811323 systemd-logind[1711]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:27:34.812050 systemd-logind[1711]: Removed session 5. Mar 7 01:27:34.892283 systemd[1]: Started sshd@3-10.200.20.41:22-10.200.16.10:59598.service - OpenSSH per-connection server daemon (10.200.16.10:59598). Mar 7 01:27:35.375961 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 59598 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo Mar 7 01:27:35.377261 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:27:35.380666 systemd-logind[1711]: New session 6 of user core. Mar 7 01:27:35.388123 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:27:35.726199 sshd[2176]: pam_unix(sshd:session): session closed for user core Mar 7 01:27:35.729790 systemd[1]: sshd@3-10.200.20.41:22-10.200.16.10:59598.service: Deactivated successfully. Mar 7 01:27:35.731395 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:27:35.732083 systemd-logind[1711]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:27:35.733032 systemd-logind[1711]: Removed session 6. 
Mar 7 01:27:35.812174 systemd[1]: Started sshd@4-10.200.20.41:22-10.200.16.10:59614.service - OpenSSH per-connection server daemon (10.200.16.10:59614).
Mar 7 01:27:36.298096 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 59614 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:27:36.299368 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:36.302818 systemd-logind[1711]: New session 7 of user core.
Mar 7 01:27:36.313153 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:27:36.683866 sudo[2186]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 01:27:36.684226 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:27:36.713777 sudo[2186]: pam_unix(sudo:session): session closed for user root
Mar 7 01:27:36.790598 sshd[2183]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:36.793904 systemd-logind[1711]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:27:36.794202 systemd[1]: sshd@4-10.200.20.41:22-10.200.16.10:59614.service: Deactivated successfully.
Mar 7 01:27:36.796367 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:27:36.798066 systemd-logind[1711]: Removed session 7.
Mar 7 01:27:36.879551 systemd[1]: Started sshd@5-10.200.20.41:22-10.200.16.10:59622.service - OpenSSH per-connection server daemon (10.200.16.10:59622).
Mar 7 01:27:37.365032 sshd[2191]: Accepted publickey for core from 10.200.16.10 port 59622 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:27:37.366377 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:37.369936 systemd-logind[1711]: New session 8 of user core.
Mar 7 01:27:37.379279 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:27:37.640095 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 01:27:37.640414 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:27:37.643726 sudo[2195]: pam_unix(sudo:session): session closed for user root
Mar 7 01:27:37.648409 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 01:27:37.648667 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:27:37.661203 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 01:27:37.662573 auditctl[2198]: No rules
Mar 7 01:27:37.663654 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 01:27:37.663866 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 01:27:37.665928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:27:37.688295 augenrules[2216]: No rules
Mar 7 01:27:37.689780 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:27:37.692217 sudo[2194]: pam_unix(sudo:session): session closed for user root
Mar 7 01:27:37.769044 sshd[2191]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:37.772466 systemd[1]: sshd@5-10.200.20.41:22-10.200.16.10:59622.service: Deactivated successfully.
Mar 7 01:27:37.773873 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:27:37.774447 systemd-logind[1711]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:27:37.775453 systemd-logind[1711]: Removed session 8.
Mar 7 01:27:37.856840 systemd[1]: Started sshd@6-10.200.20.41:22-10.200.16.10:59632.service - OpenSSH per-connection server daemon (10.200.16.10:59632).
Mar 7 01:27:38.344898 sshd[2224]: Accepted publickey for core from 10.200.16.10 port 59632 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:27:38.345696 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:38.349259 systemd-logind[1711]: New session 9 of user core.
Mar 7 01:27:38.358217 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:27:38.619208 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:27:38.619478 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:27:39.596353 (dockerd)[2242]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:27:39.596383 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:27:40.307368 dockerd[2242]: time="2026-03-07T01:27:40.306544534Z" level=info msg="Starting up"
Mar 7 01:27:40.559282 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3704160000-merged.mount: Deactivated successfully.
Mar 7 01:27:40.598071 dockerd[2242]: time="2026-03-07T01:27:40.597840395Z" level=info msg="Loading containers: start."
Mar 7 01:27:40.775008 kernel: Initializing XFRM netlink socket
Mar 7 01:27:40.933527 systemd-networkd[1356]: docker0: Link UP
Mar 7 01:27:40.963560 dockerd[2242]: time="2026-03-07T01:27:40.963014042Z" level=info msg="Loading containers: done."
Mar 7 01:27:40.980041 dockerd[2242]: time="2026-03-07T01:27:40.979981716Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:27:40.980288 dockerd[2242]: time="2026-03-07T01:27:40.980270316Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:27:40.980468 dockerd[2242]: time="2026-03-07T01:27:40.980450557Z" level=info msg="Daemon has completed initialization"
Mar 7 01:27:41.048615 dockerd[2242]: time="2026-03-07T01:27:41.048560412Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:27:41.049348 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:27:41.432640 containerd[1739]: time="2026-03-07T01:27:41.432590337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 7 01:27:42.321823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505387759.mount: Deactivated successfully.
Mar 7 01:27:43.583088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 01:27:43.592172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:27:43.596815 containerd[1739]: time="2026-03-07T01:27:43.595720447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:43.599451 containerd[1739]: time="2026-03-07T01:27:43.599422060Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 7 01:27:43.603424 containerd[1739]: time="2026-03-07T01:27:43.603385713Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:43.618249 containerd[1739]: time="2026-03-07T01:27:43.617691002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:43.619599 containerd[1739]: time="2026-03-07T01:27:43.619082207Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.186441949s"
Mar 7 01:27:43.619806 containerd[1739]: time="2026-03-07T01:27:43.619788489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 7 01:27:43.620815 containerd[1739]: time="2026-03-07T01:27:43.620794173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 01:27:43.692591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:27:43.697084 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:27:43.805099 kubelet[2444]: E0307 01:27:43.805056 2444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:27:43.810215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:27:43.810362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:27:45.419812 containerd[1739]: time="2026-03-07T01:27:45.419764941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:45.423437 containerd[1739]: time="2026-03-07T01:27:45.423408033Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 7 01:27:45.427191 containerd[1739]: time="2026-03-07T01:27:45.427149806Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:45.433906 containerd[1739]: time="2026-03-07T01:27:45.432766186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:45.433906 containerd[1739]: time="2026-03-07T01:27:45.433770669Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.812844376s"
Mar 7 01:27:45.433906 containerd[1739]: time="2026-03-07T01:27:45.433800709Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 7 01:27:45.434430 containerd[1739]: time="2026-03-07T01:27:45.434399831Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 01:27:46.999029 containerd[1739]: time="2026-03-07T01:27:46.998876855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:47.002149 containerd[1739]: time="2026-03-07T01:27:47.001929782Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 7 01:27:47.007912 containerd[1739]: time="2026-03-07T01:27:47.007556114Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:47.013272 containerd[1739]: time="2026-03-07T01:27:47.013242806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:47.014316 containerd[1739]: time="2026-03-07T01:27:47.014282889Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.579846658s"
Mar 7 01:27:47.014375 containerd[1739]: time="2026-03-07T01:27:47.014314769Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 7 01:27:47.014780 containerd[1739]: time="2026-03-07T01:27:47.014755650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 01:27:48.157855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214981301.mount: Deactivated successfully.
Mar 7 01:27:48.472347 containerd[1739]: time="2026-03-07T01:27:48.471909114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:48.475838 containerd[1739]: time="2026-03-07T01:27:48.475801242Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 7 01:27:48.479784 containerd[1739]: time="2026-03-07T01:27:48.479535290Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:48.484039 containerd[1739]: time="2026-03-07T01:27:48.483977140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:48.484953 containerd[1739]: time="2026-03-07T01:27:48.484751022Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.469963772s"
Mar 7 01:27:48.484953 containerd[1739]: time="2026-03-07T01:27:48.484782102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 7 01:27:48.485569 containerd[1739]: time="2026-03-07T01:27:48.485226943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 01:27:49.155244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897640722.mount: Deactivated successfully.
Mar 7 01:27:50.159191 containerd[1739]: time="2026-03-07T01:27:50.159139720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:50.163046 containerd[1739]: time="2026-03-07T01:27:50.162759728Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 7 01:27:50.167008 containerd[1739]: time="2026-03-07T01:27:50.166636057Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:50.172878 containerd[1739]: time="2026-03-07T01:27:50.172837950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:50.174017 containerd[1739]: time="2026-03-07T01:27:50.173970753Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.68871437s"
Mar 7 01:27:50.174080 containerd[1739]: time="2026-03-07T01:27:50.174015953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 7 01:27:50.174510 containerd[1739]: time="2026-03-07T01:27:50.174483914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 01:27:50.772723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557638099.mount: Deactivated successfully.
Mar 7 01:27:50.796021 containerd[1739]: time="2026-03-07T01:27:50.795542769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:50.799972 containerd[1739]: time="2026-03-07T01:27:50.799712417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 7 01:27:50.805235 containerd[1739]: time="2026-03-07T01:27:50.803961505Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:50.809394 containerd[1739]: time="2026-03-07T01:27:50.808531633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:50.809394 containerd[1739]: time="2026-03-07T01:27:50.809292954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 634.77856ms"
Mar 7 01:27:50.809394 containerd[1739]: time="2026-03-07T01:27:50.809319194Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 7 01:27:50.810244 containerd[1739]: time="2026-03-07T01:27:50.810216876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 01:27:51.479984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539234832.mount: Deactivated successfully.
Mar 7 01:27:53.773021 containerd[1739]: time="2026-03-07T01:27:53.771916809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:53.775716 containerd[1739]: time="2026-03-07T01:27:53.775683216Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 7 01:27:53.779337 containerd[1739]: time="2026-03-07T01:27:53.779291622Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:53.785882 containerd[1739]: time="2026-03-07T01:27:53.785832034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:27:53.787267 containerd[1739]: time="2026-03-07T01:27:53.786898596Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.9766484s"
Mar 7 01:27:53.787267 containerd[1739]: time="2026-03-07T01:27:53.786934516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 7 01:27:53.833764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 7 01:27:53.843409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:27:53.953633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:27:53.958246 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:27:53.990538 kubelet[2605]: E0307 01:27:53.990479 2605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:27:53.993346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:27:53.993588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:27:54.344454 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 7 01:27:56.715138 update_engine[1717]: I20260307 01:27:56.715076 1717 update_attempter.cc:509] Updating boot flags...
Mar 7 01:27:56.775043 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2634)
Mar 7 01:27:58.822463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:27:58.831397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:27:58.873209 systemd[1]: Reloading requested from client PID 2668 ('systemctl') (unit session-9.scope)...
Mar 7 01:27:58.873225 systemd[1]: Reloading...
Mar 7 01:27:58.970014 zram_generator::config[2708]: No configuration found.
Mar 7 01:27:59.068416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:27:59.146166 systemd[1]: Reloading finished in 272 ms.
Mar 7 01:27:59.197580 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:27:59.197677 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:27:59.197998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:27:59.200719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:27:59.363710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:27:59.368605 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:27:59.401967 kubelet[2776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:27:59.402394 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:27:59.402456 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:27:59.402609 kubelet[2776]: I0307 01:27:59.402577 2776 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:28:00.228015 kubelet[2776]: I0307 01:28:00.227817 2776 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:28:00.228015 kubelet[2776]: I0307 01:28:00.227851 2776 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:28:00.228169 kubelet[2776]: I0307 01:28:00.228106 2776 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:28:00.249193 kubelet[2776]: E0307 01:28:00.249148 2776 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:28:00.250677 kubelet[2776]: I0307 01:28:00.250649 2776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:28:00.258071 kubelet[2776]: E0307 01:28:00.258025 2776 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:28:00.258071 kubelet[2776]: I0307 01:28:00.258055 2776 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:28:00.261269 kubelet[2776]: I0307 01:28:00.261245 2776 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:28:00.263232 kubelet[2776]: I0307 01:28:00.263190 2776 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:28:00.263396 kubelet[2776]: I0307 01:28:00.263235 2776 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-24b0a814a4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:28:00.263486 kubelet[2776]: I0307 01:28:00.263400 2776 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:28:00.263486 kubelet[2776]: I0307 01:28:00.263409 2776 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:28:00.263562 kubelet[2776]: I0307 01:28:00.263546 2776 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:28:00.266348 kubelet[2776]: I0307 01:28:00.266325 2776 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:28:00.266391 kubelet[2776]: I0307 01:28:00.266358 2776 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:28:00.266391 kubelet[2776]: I0307 01:28:00.266380 2776 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:28:00.267884 kubelet[2776]: I0307 01:28:00.267499 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:28:00.275081 kubelet[2776]: E0307 01:28:00.275043 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:28:00.275178 kubelet[2776]: I0307 01:28:00.275149 2776 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:28:00.275761 kubelet[2776]: I0307 01:28:00.275738 2776 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:28:00.275829 kubelet[2776]: W0307 01:28:00.275814 2776 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:28:00.279041 kubelet[2776]: I0307 01:28:00.278901 2776 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:28:00.279041 kubelet[2776]: I0307 01:28:00.278943 2776 server.go:1289] "Started kubelet"
Mar 7 01:28:00.281098 kubelet[2776]: E0307 01:28:00.281069 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-24b0a814a4&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:28:00.281167 kubelet[2776]: I0307 01:28:00.281131 2776 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:28:00.281720 kubelet[2776]: I0307 01:28:00.281263 2776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:28:00.281720 kubelet[2776]: I0307 01:28:00.281598 2776 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:28:00.285283 kubelet[2776]: I0307 01:28:00.285256 2776 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:28:00.285905 kubelet[2776]: I0307 01:28:00.285889 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:28:00.287259 kubelet[2776]: E0307 01:28:00.286134 2776 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-24b0a814a4.189a6ad83c2f2cdb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-24b0a814a4,UID:ci-4081.3.6-n-24b0a814a4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-24b0a814a4,},FirstTimestamp:2026-03-07 01:28:00.278916315 +0000 UTC m=+0.907048650,LastTimestamp:2026-03-07 01:28:00.278916315 +0000 UTC m=+0.907048650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-24b0a814a4,}"
Mar 7 01:28:00.287616 kubelet[2776]: I0307 01:28:00.287589 2776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:28:00.288953 kubelet[2776]: E0307 01:28:00.288930 2776 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-24b0a814a4\" not found"
Mar 7 01:28:00.289094 kubelet[2776]: I0307 01:28:00.288969 2776 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:28:00.289897 kubelet[2776]: I0307 01:28:00.289172 2776 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:28:00.289897 kubelet[2776]: I0307 01:28:00.289240 2776 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:28:00.289897 kubelet[2776]: E0307 01:28:00.289550 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:28:00.291030 kubelet[2776]: E0307 01:28:00.290954 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-24b0a814a4?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="200ms"
Mar 7 01:28:00.291474 kubelet[2776]: I0307 01:28:00.291453 2776 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:28:00.291557 kubelet[2776]: I0307 01:28:00.291539 2776 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:28:00.292256 kubelet[2776]: I0307 01:28:00.292236 2776 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:28:00.307248 kubelet[2776]: I0307 01:28:00.307211 2776 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:28:00.308239 kubelet[2776]: I0307 01:28:00.308225 2776 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:28:00.308311 kubelet[2776]: I0307 01:28:00.308303 2776 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:28:00.308378 kubelet[2776]: I0307 01:28:00.308369 2776 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:28:00.308424 kubelet[2776]: I0307 01:28:00.308417 2776 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:28:00.308507 kubelet[2776]: E0307 01:28:00.308493 2776 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:28:00.315288 kubelet[2776]: E0307 01:28:00.315207 2776 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:28:00.315946 kubelet[2776]: E0307 01:28:00.315920 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:28:00.389889 kubelet[2776]: E0307 01:28:00.389850 2776 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-24b0a814a4\" not found"
Mar 7 01:28:00.409066 kubelet[2776]: E0307 01:28:00.409039 2776 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 7 01:28:00.421209 kubelet[2776]: I0307 01:28:00.421187 2776 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:28:00.421209 kubelet[2776]: I0307 01:28:00.421204 2776 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:28:00.421317 kubelet[2776]: I0307 01:28:00.421223 2776 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:28:00.430189 kubelet[2776]: I0307 01:28:00.430165 2776 policy_none.go:49] "None policy: Start"
Mar 7 01:28:00.430189 kubelet[2776]: I0307 01:28:00.430190 2776 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:28:00.430299 kubelet[2776]: I0307 01:28:00.430202 2776 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:28:00.441186 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:28:00.451699 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:28:00.462795 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:28:00.464055 kubelet[2776]: E0307 01:28:00.464030 2776 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:28:00.464247 kubelet[2776]: I0307 01:28:00.464231 2776 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:28:00.464280 kubelet[2776]: I0307 01:28:00.464248 2776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:28:00.465864 kubelet[2776]: I0307 01:28:00.464932 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:28:00.466434 kubelet[2776]: E0307 01:28:00.466410 2776 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:28:00.466511 kubelet[2776]: E0307 01:28:00.466449 2776 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-24b0a814a4\" not found" Mar 7 01:28:00.492460 kubelet[2776]: E0307 01:28:00.492341 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-24b0a814a4?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="400ms" Mar 7 01:28:00.566524 kubelet[2776]: I0307 01:28:00.566492 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.566832 kubelet[2776]: E0307 01:28:00.566808 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.624530 systemd[1]: Created slice kubepods-burstable-pod10c8bb68bce596ad1e20a6a43daa496e.slice - libcontainer container 
kubepods-burstable-pod10c8bb68bce596ad1e20a6a43daa496e.slice. Mar 7 01:28:00.635172 kubelet[2776]: E0307 01:28:00.635139 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.639487 systemd[1]: Created slice kubepods-burstable-pod5a2eeaae937b0976a508bc832e76a281.slice - libcontainer container kubepods-burstable-pod5a2eeaae937b0976a508bc832e76a281.slice. Mar 7 01:28:00.641595 kubelet[2776]: E0307 01:28:00.641552 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.653042 systemd[1]: Created slice kubepods-burstable-podb7770267a74b8a80b903f5294fea86d4.slice - libcontainer container kubepods-burstable-podb7770267a74b8a80b903f5294fea86d4.slice. Mar 7 01:28:00.654759 kubelet[2776]: E0307 01:28:00.654590 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691019 kubelet[2776]: I0307 01:28:00.690786 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10c8bb68bce596ad1e20a6a43daa496e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" (UID: \"10c8bb68bce596ad1e20a6a43daa496e\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691019 kubelet[2776]: I0307 01:28:00.690834 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691019 kubelet[2776]: I0307 01:28:00.690860 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10c8bb68bce596ad1e20a6a43daa496e-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" (UID: \"10c8bb68bce596ad1e20a6a43daa496e\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691019 kubelet[2776]: I0307 01:28:00.690879 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691019 kubelet[2776]: I0307 01:28:00.690920 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691226 kubelet[2776]: I0307 01:28:00.690936 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691226 kubelet[2776]: I0307 01:28:00.690952 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691226 kubelet[2776]: I0307 01:28:00.690967 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7770267a74b8a80b903f5294fea86d4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-24b0a814a4\" (UID: \"b7770267a74b8a80b903f5294fea86d4\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.691226 kubelet[2776]: I0307 01:28:00.690981 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10c8bb68bce596ad1e20a6a43daa496e-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" (UID: \"10c8bb68bce596ad1e20a6a43daa496e\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.771747 kubelet[2776]: I0307 01:28:00.771100 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.771747 kubelet[2776]: E0307 01:28:00.771454 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:00.892756 kubelet[2776]: E0307 01:28:00.892719 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-24b0a814a4?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="800ms" Mar 7 01:28:00.936826 containerd[1739]: time="2026-03-07T01:28:00.936535639Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-24b0a814a4,Uid:10c8bb68bce596ad1e20a6a43daa496e,Namespace:kube-system,Attempt:0,}" Mar 7 01:28:00.942901 containerd[1739]: time="2026-03-07T01:28:00.942666656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-24b0a814a4,Uid:5a2eeaae937b0976a508bc832e76a281,Namespace:kube-system,Attempt:0,}" Mar 7 01:28:00.955755 containerd[1739]: time="2026-03-07T01:28:00.955721572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-24b0a814a4,Uid:b7770267a74b8a80b903f5294fea86d4,Namespace:kube-system,Attempt:0,}" Mar 7 01:28:01.173747 kubelet[2776]: I0307 01:28:01.173710 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:01.174096 kubelet[2776]: E0307 01:28:01.174056 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:01.317606 kubelet[2776]: E0307 01:28:01.317502 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:28:01.337903 kubelet[2776]: E0307 01:28:01.337485 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:28:01.452753 kubelet[2776]: E0307 01:28:01.452649 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:28:01.536786 kubelet[2776]: E0307 01:28:01.536743 2776 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-24b0a814a4&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:28:01.579025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487932569.mount: Deactivated successfully. Mar 7 01:28:01.610998 containerd[1739]: time="2026-03-07T01:28:01.610937330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:28:01.614721 containerd[1739]: time="2026-03-07T01:28:01.614676700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 7 01:28:01.618165 containerd[1739]: time="2026-03-07T01:28:01.618130390Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:28:01.622149 containerd[1739]: time="2026-03-07T01:28:01.621404879Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:28:01.624915 containerd[1739]: time="2026-03-07T01:28:01.624841968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:28:01.630013 containerd[1739]: 
time="2026-03-07T01:28:01.629307660Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:28:01.632567 containerd[1739]: time="2026-03-07T01:28:01.632507109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:28:01.637872 containerd[1739]: time="2026-03-07T01:28:01.637829764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:28:01.638982 containerd[1739]: time="2026-03-07T01:28:01.638786846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 682.867873ms" Mar 7 01:28:01.639697 containerd[1739]: time="2026-03-07T01:28:01.639616969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 703.003809ms" Mar 7 01:28:01.640769 containerd[1739]: time="2026-03-07T01:28:01.640739372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" 
in 698.002796ms" Mar 7 01:28:01.694216 kubelet[2776]: E0307 01:28:01.694173 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-24b0a814a4?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="1.6s" Mar 7 01:28:01.976582 kubelet[2776]: I0307 01:28:01.976544 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:01.976919 kubelet[2776]: E0307 01:28:01.976889 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:02.441707 kubelet[2776]: E0307 01:28:02.441645 2776 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:28:02.445913 containerd[1739]: time="2026-03-07T01:28:02.445635141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:02.445913 containerd[1739]: time="2026-03-07T01:28:02.445714141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:02.445913 containerd[1739]: time="2026-03-07T01:28:02.445726141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:02.446730 containerd[1739]: time="2026-03-07T01:28:02.446577023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:02.446730 containerd[1739]: time="2026-03-07T01:28:02.446626503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:02.446730 containerd[1739]: time="2026-03-07T01:28:02.446641783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:02.448045 containerd[1739]: time="2026-03-07T01:28:02.447979387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:02.448154 containerd[1739]: time="2026-03-07T01:28:02.448111227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:02.452456 containerd[1739]: time="2026-03-07T01:28:02.452190599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:02.452456 containerd[1739]: time="2026-03-07T01:28:02.452242039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:02.452456 containerd[1739]: time="2026-03-07T01:28:02.452263719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:02.452456 containerd[1739]: time="2026-03-07T01:28:02.452378359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:02.468159 systemd[1]: Started cri-containerd-92520bb4096b887a5cb548779c9fee9a183cfbd8edbba8f6772a5e126af6307c.scope - libcontainer container 92520bb4096b887a5cb548779c9fee9a183cfbd8edbba8f6772a5e126af6307c. 
Mar 7 01:28:02.472155 systemd[1]: Started cri-containerd-a1aee9e774d2dac213fa9fcf05c280a54ed9bb145d8cd56a3b4bf79fde424f0b.scope - libcontainer container a1aee9e774d2dac213fa9fcf05c280a54ed9bb145d8cd56a3b4bf79fde424f0b. Mar 7 01:28:02.484165 systemd[1]: Started cri-containerd-430134b6906d5de63259bd28f6026fffa1fb5651f930b68dbf5a12aa156582e0.scope - libcontainer container 430134b6906d5de63259bd28f6026fffa1fb5651f930b68dbf5a12aa156582e0. Mar 7 01:28:02.516252 containerd[1739]: time="2026-03-07T01:28:02.516215054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-24b0a814a4,Uid:10c8bb68bce596ad1e20a6a43daa496e,Namespace:kube-system,Attempt:0,} returns sandbox id \"92520bb4096b887a5cb548779c9fee9a183cfbd8edbba8f6772a5e126af6307c\"" Mar 7 01:28:02.527606 containerd[1739]: time="2026-03-07T01:28:02.527568125Z" level=info msg="CreateContainer within sandbox \"92520bb4096b887a5cb548779c9fee9a183cfbd8edbba8f6772a5e126af6307c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:28:02.530054 containerd[1739]: time="2026-03-07T01:28:02.528838129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-24b0a814a4,Uid:b7770267a74b8a80b903f5294fea86d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1aee9e774d2dac213fa9fcf05c280a54ed9bb145d8cd56a3b4bf79fde424f0b\"" Mar 7 01:28:02.533243 containerd[1739]: time="2026-03-07T01:28:02.533211301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-24b0a814a4,Uid:5a2eeaae937b0976a508bc832e76a281,Namespace:kube-system,Attempt:0,} returns sandbox id \"430134b6906d5de63259bd28f6026fffa1fb5651f930b68dbf5a12aa156582e0\"" Mar 7 01:28:02.539750 containerd[1739]: time="2026-03-07T01:28:02.539718679Z" level=info msg="CreateContainer within sandbox \"a1aee9e774d2dac213fa9fcf05c280a54ed9bb145d8cd56a3b4bf79fde424f0b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:28:02.546531 
containerd[1739]: time="2026-03-07T01:28:02.546492217Z" level=info msg="CreateContainer within sandbox \"430134b6906d5de63259bd28f6026fffa1fb5651f930b68dbf5a12aa156582e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:28:02.605205 containerd[1739]: time="2026-03-07T01:28:02.605159098Z" level=info msg="CreateContainer within sandbox \"92520bb4096b887a5cb548779c9fee9a183cfbd8edbba8f6772a5e126af6307c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b74576d0044c02ede50e1fb4af77bdd32f189071a66665067f984a47a7256cbd\"" Mar 7 01:28:02.605773 containerd[1739]: time="2026-03-07T01:28:02.605750220Z" level=info msg="StartContainer for \"b74576d0044c02ede50e1fb4af77bdd32f189071a66665067f984a47a7256cbd\"" Mar 7 01:28:02.639141 systemd[1]: Started cri-containerd-b74576d0044c02ede50e1fb4af77bdd32f189071a66665067f984a47a7256cbd.scope - libcontainer container b74576d0044c02ede50e1fb4af77bdd32f189071a66665067f984a47a7256cbd. Mar 7 01:28:02.640607 containerd[1739]: time="2026-03-07T01:28:02.640250355Z" level=info msg="CreateContainer within sandbox \"430134b6906d5de63259bd28f6026fffa1fb5651f930b68dbf5a12aa156582e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"37b97dca64115214ceb0b0cbe084049f82b2d9e2fe921ed1a7dc6e62f7d7ce6f\"" Mar 7 01:28:02.640853 containerd[1739]: time="2026-03-07T01:28:02.640834196Z" level=info msg="StartContainer for \"37b97dca64115214ceb0b0cbe084049f82b2d9e2fe921ed1a7dc6e62f7d7ce6f\"" Mar 7 01:28:02.647216 containerd[1739]: time="2026-03-07T01:28:02.647174214Z" level=info msg="CreateContainer within sandbox \"a1aee9e774d2dac213fa9fcf05c280a54ed9bb145d8cd56a3b4bf79fde424f0b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c820e60ffcd05717179f37a04ba383d65b10d7967a0d230dbff0344d73b8274\"" Mar 7 01:28:02.649518 containerd[1739]: time="2026-03-07T01:28:02.649483420Z" level=info msg="StartContainer for 
\"8c820e60ffcd05717179f37a04ba383d65b10d7967a0d230dbff0344d73b8274\"" Mar 7 01:28:02.679543 systemd[1]: Started cri-containerd-37b97dca64115214ceb0b0cbe084049f82b2d9e2fe921ed1a7dc6e62f7d7ce6f.scope - libcontainer container 37b97dca64115214ceb0b0cbe084049f82b2d9e2fe921ed1a7dc6e62f7d7ce6f. Mar 7 01:28:02.683935 systemd[1]: Started cri-containerd-8c820e60ffcd05717179f37a04ba383d65b10d7967a0d230dbff0344d73b8274.scope - libcontainer container 8c820e60ffcd05717179f37a04ba383d65b10d7967a0d230dbff0344d73b8274. Mar 7 01:28:02.704762 containerd[1739]: time="2026-03-07T01:28:02.704213032Z" level=info msg="StartContainer for \"b74576d0044c02ede50e1fb4af77bdd32f189071a66665067f984a47a7256cbd\" returns successfully" Mar 7 01:28:02.756260 containerd[1739]: time="2026-03-07T01:28:02.756214266Z" level=info msg="StartContainer for \"37b97dca64115214ceb0b0cbe084049f82b2d9e2fe921ed1a7dc6e62f7d7ce6f\" returns successfully" Mar 7 01:28:02.756398 containerd[1739]: time="2026-03-07T01:28:02.756310106Z" level=info msg="StartContainer for \"8c820e60ffcd05717179f37a04ba383d65b10d7967a0d230dbff0344d73b8274\" returns successfully" Mar 7 01:28:03.326002 kubelet[2776]: E0307 01:28:03.324686 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:03.327389 kubelet[2776]: E0307 01:28:03.326938 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:03.330714 kubelet[2776]: E0307 01:28:03.330692 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:03.581570 kubelet[2776]: I0307 01:28:03.581173 2776 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:04.332740 kubelet[2776]: E0307 01:28:04.332711 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:04.333071 kubelet[2776]: E0307 01:28:04.332959 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:04.355976 kubelet[2776]: E0307 01:28:04.355950 2776 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:04.867606 kubelet[2776]: E0307 01:28:04.867571 2776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-24b0a814a4\" not found" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:04.999149 kubelet[2776]: I0307 01:28:04.999115 2776 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.090044 kubelet[2776]: I0307 01:28:05.090007 2776 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.134998 kubelet[2776]: E0307 01:28:05.134576 2776 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.134998 kubelet[2776]: I0307 01:28:05.134640 2776 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.140302 kubelet[2776]: E0307 01:28:05.140046 2776 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-24b0a814a4\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.140302 kubelet[2776]: I0307 01:28:05.140074 2776 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.147184 kubelet[2776]: E0307 01:28:05.147148 2776 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.274999 kubelet[2776]: I0307 01:28:05.274759 2776 apiserver.go:52] "Watching apiserver" Mar 7 01:28:05.291996 kubelet[2776]: I0307 01:28:05.290125 2776 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:28:05.334388 kubelet[2776]: I0307 01:28:05.334154 2776 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:05.338045 kubelet[2776]: E0307 01:28:05.338017 2776 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:07.429796 systemd[1]: Reloading requested from client PID 3058 ('systemctl') (unit session-9.scope)... Mar 7 01:28:07.429809 systemd[1]: Reloading... Mar 7 01:28:07.516282 zram_generator::config[3098]: No configuration found. Mar 7 01:28:07.629042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:28:07.719298 systemd[1]: Reloading finished in 289 ms. Mar 7 01:28:07.758225 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:28:07.776117 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:28:07.776352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:28:07.776404 systemd[1]: kubelet.service: Consumed 1.241s CPU time, 126.3M memory peak, 0B memory swap peak. Mar 7 01:28:07.786350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:28:07.973166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:28:07.981351 (kubelet)[3162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:28:08.017010 kubelet[3162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:28:08.017010 kubelet[3162]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:28:08.017010 kubelet[3162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:28:08.017010 kubelet[3162]: I0307 01:28:08.015893 3162 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:28:08.024029 kubelet[3162]: I0307 01:28:08.023837 3162 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:28:08.024029 kubelet[3162]: I0307 01:28:08.023864 3162 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:28:08.024588 kubelet[3162]: I0307 01:28:08.024286 3162 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:28:08.025637 kubelet[3162]: I0307 01:28:08.025604 3162 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:28:08.028913 kubelet[3162]: I0307 01:28:08.028739 3162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:28:08.031614 kubelet[3162]: E0307 01:28:08.031585 3162 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:28:08.031686 kubelet[3162]: I0307 01:28:08.031616 3162 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:28:08.034649 kubelet[3162]: I0307 01:28:08.034616 3162 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:28:08.035281 kubelet[3162]: I0307 01:28:08.034829 3162 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:28:08.035727 kubelet[3162]: I0307 01:28:08.034857 3162 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-24b0a814a4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:28:08.035727 kubelet[3162]: I0307 01:28:08.035656 3162 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
01:28:08.035727 kubelet[3162]: I0307 01:28:08.035667 3162 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:28:08.035727 kubelet[3162]: I0307 01:28:08.035718 3162 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:28:08.036035 kubelet[3162]: I0307 01:28:08.035888 3162 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:28:08.036035 kubelet[3162]: I0307 01:28:08.035907 3162 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:28:08.036035 kubelet[3162]: I0307 01:28:08.035981 3162 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:28:08.036423 kubelet[3162]: I0307 01:28:08.036118 3162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:28:08.037900 kubelet[3162]: I0307 01:28:08.037872 3162 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:28:08.038552 kubelet[3162]: I0307 01:28:08.038530 3162 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:28:08.041193 kubelet[3162]: I0307 01:28:08.041174 3162 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:28:08.041322 kubelet[3162]: I0307 01:28:08.041307 3162 server.go:1289] "Started kubelet" Mar 7 01:28:08.044345 kubelet[3162]: I0307 01:28:08.044316 3162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:28:08.049148 kubelet[3162]: I0307 01:28:08.048064 3162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:28:08.050065 kubelet[3162]: I0307 01:28:08.049898 3162 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:28:08.051289 kubelet[3162]: E0307 01:28:08.051226 3162 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4081.3.6-n-24b0a814a4\" not found" Mar 7 01:28:08.051933 kubelet[3162]: I0307 01:28:08.051818 3162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:28:08.052124 kubelet[3162]: I0307 01:28:08.052029 3162 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:28:08.052124 kubelet[3162]: I0307 01:28:08.052075 3162 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:28:08.052184 kubelet[3162]: I0307 01:28:08.052161 3162 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:28:08.068013 kubelet[3162]: I0307 01:28:08.067016 3162 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:28:08.068013 kubelet[3162]: I0307 01:28:08.067131 3162 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:28:08.070149 kubelet[3162]: I0307 01:28:08.070106 3162 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:28:08.070884 kubelet[3162]: I0307 01:28:08.070869 3162 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:28:08.089103 kubelet[3162]: I0307 01:28:08.089054 3162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:28:08.092124 kubelet[3162]: I0307 01:28:08.092093 3162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:28:08.092124 kubelet[3162]: I0307 01:28:08.092125 3162 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:28:08.092259 kubelet[3162]: I0307 01:28:08.092145 3162 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:28:08.092259 kubelet[3162]: I0307 01:28:08.092151 3162 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:28:08.092259 kubelet[3162]: E0307 01:28:08.092191 3162 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:28:08.099723 kubelet[3162]: I0307 01:28:08.099692 3162 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146287 3162 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146307 3162 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146330 3162 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146455 3162 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146464 3162 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146479 3162 policy_none.go:49] "None policy: Start" Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146488 3162 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146496 3162 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:28:08.147066 kubelet[3162]: I0307 01:28:08.146593 3162 state_mem.go:75] "Updated machine memory state" Mar 7 01:28:08.151193 kubelet[3162]: E0307 01:28:08.150715 3162 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:28:08.151193 kubelet[3162]: I0307 01:28:08.150887 3162 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:28:08.151193 kubelet[3162]: I0307 01:28:08.150902 3162 container_log_manager.go:189] "Initializing container log 
rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:28:08.151193 kubelet[3162]: I0307 01:28:08.151138 3162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:28:08.153832 kubelet[3162]: E0307 01:28:08.152709 3162 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:28:08.193598 kubelet[3162]: I0307 01:28:08.193557 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.193989 kubelet[3162]: I0307 01:28:08.193966 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.194406 kubelet[3162]: I0307 01:28:08.194237 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.204927 kubelet[3162]: I0307 01:28:08.204899 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:28:08.209351 kubelet[3162]: I0307 01:28:08.209214 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:28:08.209351 kubelet[3162]: I0307 01:28:08.209242 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:28:08.252755 kubelet[3162]: I0307 01:28:08.252335 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.252755 kubelet[3162]: I0307 01:28:08.252373 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.252755 kubelet[3162]: I0307 01:28:08.252405 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.252755 kubelet[3162]: I0307 01:28:08.252424 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.252755 kubelet[3162]: I0307 01:28:08.252438 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10c8bb68bce596ad1e20a6a43daa496e-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" (UID: \"10c8bb68bce596ad1e20a6a43daa496e\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.253492 kubelet[3162]: I0307 01:28:08.252454 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/10c8bb68bce596ad1e20a6a43daa496e-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" (UID: \"10c8bb68bce596ad1e20a6a43daa496e\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.253492 kubelet[3162]: I0307 01:28:08.252468 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10c8bb68bce596ad1e20a6a43daa496e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-24b0a814a4\" (UID: \"10c8bb68bce596ad1e20a6a43daa496e\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.253492 kubelet[3162]: I0307 01:28:08.252484 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a2eeaae937b0976a508bc832e76a281-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-24b0a814a4\" (UID: \"5a2eeaae937b0976a508bc832e76a281\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.253492 kubelet[3162]: I0307 01:28:08.252509 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7770267a74b8a80b903f5294fea86d4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-24b0a814a4\" (UID: \"b7770267a74b8a80b903f5294fea86d4\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.255950 kubelet[3162]: I0307 01:28:08.255713 3162 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.269534 kubelet[3162]: I0307 01:28:08.269463 3162 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:08.269754 kubelet[3162]: I0307 01:28:08.269555 3162 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:09.046862 kubelet[3162]: I0307 01:28:09.046825 3162 apiserver.go:52] "Watching apiserver" Mar 7 01:28:09.053091 kubelet[3162]: I0307 01:28:09.053053 3162 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:28:09.149738 kubelet[3162]: I0307 01:28:09.149571 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-24b0a814a4" podStartSLOduration=1.149557221 podStartE2EDuration="1.149557221s" podCreationTimestamp="2026-03-07 01:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:28:09.148748219 +0000 UTC m=+1.163908805" watchObservedRunningTime="2026-03-07 01:28:09.149557221 +0000 UTC m=+1.164717807" Mar 7 01:28:09.165343 kubelet[3162]: I0307 01:28:09.165158 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-24b0a814a4" podStartSLOduration=1.165142138 podStartE2EDuration="1.165142138s" podCreationTimestamp="2026-03-07 01:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:28:09.165090978 +0000 UTC m=+1.180251564" watchObservedRunningTime="2026-03-07 01:28:09.165142138 +0000 UTC m=+1.180302724" Mar 7 01:28:09.191602 kubelet[3162]: I0307 01:28:09.191441 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-24b0a814a4" podStartSLOduration=1.191414241 podStartE2EDuration="1.191414241s" podCreationTimestamp="2026-03-07 01:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:28:09.178032849 +0000 UTC m=+1.193193435" watchObservedRunningTime="2026-03-07 01:28:09.191414241 +0000 
UTC m=+1.206574867" Mar 7 01:28:13.108617 kubelet[3162]: I0307 01:28:13.108466 3162 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:28:13.109229 containerd[1739]: time="2026-03-07T01:28:13.108766504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:28:13.109902 kubelet[3162]: I0307 01:28:13.109627 3162 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:28:14.046310 systemd[1]: Created slice kubepods-besteffort-podbe8bb779_21bd_465d_812c_1f2e7dfc7831.slice - libcontainer container kubepods-besteffort-podbe8bb779_21bd_465d_812c_1f2e7dfc7831.slice. Mar 7 01:28:14.081539 kubelet[3162]: I0307 01:28:14.081497 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be8bb779-21bd-465d-812c-1f2e7dfc7831-kube-proxy\") pod \"kube-proxy-mf7tf\" (UID: \"be8bb779-21bd-465d-812c-1f2e7dfc7831\") " pod="kube-system/kube-proxy-mf7tf" Mar 7 01:28:14.081539 kubelet[3162]: I0307 01:28:14.081539 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be8bb779-21bd-465d-812c-1f2e7dfc7831-xtables-lock\") pod \"kube-proxy-mf7tf\" (UID: \"be8bb779-21bd-465d-812c-1f2e7dfc7831\") " pod="kube-system/kube-proxy-mf7tf" Mar 7 01:28:14.081701 kubelet[3162]: I0307 01:28:14.081562 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be8bb779-21bd-465d-812c-1f2e7dfc7831-lib-modules\") pod \"kube-proxy-mf7tf\" (UID: \"be8bb779-21bd-465d-812c-1f2e7dfc7831\") " pod="kube-system/kube-proxy-mf7tf" Mar 7 01:28:14.081701 kubelet[3162]: I0307 01:28:14.081602 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-qwdd4\" (UniqueName: \"kubernetes.io/projected/be8bb779-21bd-465d-812c-1f2e7dfc7831-kube-api-access-qwdd4\") pod \"kube-proxy-mf7tf\" (UID: \"be8bb779-21bd-465d-812c-1f2e7dfc7831\") " pod="kube-system/kube-proxy-mf7tf" Mar 7 01:28:14.282212 kubelet[3162]: I0307 01:28:14.282169 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpk5\" (UniqueName: \"kubernetes.io/projected/7d6b64dc-1ccd-4048-a049-45a90619f82b-kube-api-access-sbpk5\") pod \"tigera-operator-6bf85f8dd-qmp46\" (UID: \"7d6b64dc-1ccd-4048-a049-45a90619f82b\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qmp46" Mar 7 01:28:14.282212 kubelet[3162]: I0307 01:28:14.282209 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d6b64dc-1ccd-4048-a049-45a90619f82b-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-qmp46\" (UID: \"7d6b64dc-1ccd-4048-a049-45a90619f82b\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qmp46" Mar 7 01:28:14.287024 systemd[1]: Created slice kubepods-besteffort-pod7d6b64dc_1ccd_4048_a049_45a90619f82b.slice - libcontainer container kubepods-besteffort-pod7d6b64dc_1ccd_4048_a049_45a90619f82b.slice. Mar 7 01:28:14.357155 containerd[1739]: time="2026-03-07T01:28:14.357116788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mf7tf,Uid:be8bb779-21bd-465d-812c-1f2e7dfc7831,Namespace:kube-system,Attempt:0,}" Mar 7 01:28:14.404182 containerd[1739]: time="2026-03-07T01:28:14.404092740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:14.404435 containerd[1739]: time="2026-03-07T01:28:14.404141860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:14.404435 containerd[1739]: time="2026-03-07T01:28:14.404328300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:14.404572 containerd[1739]: time="2026-03-07T01:28:14.404535021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:14.430247 systemd[1]: Started cri-containerd-e7c90b739bbde0d54661abb84c0c11fc464b49ce47ab0958bd47130b01de4e8f.scope - libcontainer container e7c90b739bbde0d54661abb84c0c11fc464b49ce47ab0958bd47130b01de4e8f. Mar 7 01:28:14.451034 containerd[1739]: time="2026-03-07T01:28:14.450968726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mf7tf,Uid:be8bb779-21bd-465d-812c-1f2e7dfc7831,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7c90b739bbde0d54661abb84c0c11fc464b49ce47ab0958bd47130b01de4e8f\"" Mar 7 01:28:14.461484 containerd[1739]: time="2026-03-07T01:28:14.461442029Z" level=info msg="CreateContainer within sandbox \"e7c90b739bbde0d54661abb84c0c11fc464b49ce47ab0958bd47130b01de4e8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:28:14.500433 containerd[1739]: time="2026-03-07T01:28:14.500268836Z" level=info msg="CreateContainer within sandbox \"e7c90b739bbde0d54661abb84c0c11fc464b49ce47ab0958bd47130b01de4e8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"53959ec398a9095378f3923588a25a3d0ababf7422379803fe2f593a154d29b5\"" Mar 7 01:28:14.501899 containerd[1739]: time="2026-03-07T01:28:14.501871280Z" level=info msg="StartContainer for \"53959ec398a9095378f3923588a25a3d0ababf7422379803fe2f593a154d29b5\"" Mar 7 01:28:14.527225 systemd[1]: Started cri-containerd-53959ec398a9095378f3923588a25a3d0ababf7422379803fe2f593a154d29b5.scope - libcontainer container 53959ec398a9095378f3923588a25a3d0ababf7422379803fe2f593a154d29b5. 
Mar 7 01:28:14.556752 containerd[1739]: time="2026-03-07T01:28:14.556709362Z" level=info msg="StartContainer for \"53959ec398a9095378f3923588a25a3d0ababf7422379803fe2f593a154d29b5\" returns successfully" Mar 7 01:28:14.591705 containerd[1739]: time="2026-03-07T01:28:14.591517440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qmp46,Uid:7d6b64dc-1ccd-4048-a049-45a90619f82b,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:28:14.641234 containerd[1739]: time="2026-03-07T01:28:14.640572589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:14.641234 containerd[1739]: time="2026-03-07T01:28:14.640637510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:14.641234 containerd[1739]: time="2026-03-07T01:28:14.640867110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:14.641234 containerd[1739]: time="2026-03-07T01:28:14.641073871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:14.659307 systemd[1]: Started cri-containerd-08ae1ffc73db85a520c40f1b53d836065b68918e2fcd293b517ad83db01f4734.scope - libcontainer container 08ae1ffc73db85a520c40f1b53d836065b68918e2fcd293b517ad83db01f4734. 
Mar 7 01:28:14.701061 containerd[1739]: time="2026-03-07T01:28:14.700929564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qmp46,Uid:7d6b64dc-1ccd-4048-a049-45a90619f82b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"08ae1ffc73db85a520c40f1b53d836065b68918e2fcd293b517ad83db01f4734\"" Mar 7 01:28:14.703203 containerd[1739]: time="2026-03-07T01:28:14.703174249Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:28:15.173032 kubelet[3162]: I0307 01:28:15.172484 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mf7tf" podStartSLOduration=1.172469018 podStartE2EDuration="1.172469018s" podCreationTimestamp="2026-03-07 01:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:28:15.172275377 +0000 UTC m=+7.187435963" watchObservedRunningTime="2026-03-07 01:28:15.172469018 +0000 UTC m=+7.187629604" Mar 7 01:28:15.197358 systemd[1]: run-containerd-runc-k8s.io-e7c90b739bbde0d54661abb84c0c11fc464b49ce47ab0958bd47130b01de4e8f-runc.D4Nl1O.mount: Deactivated successfully. Mar 7 01:28:16.437218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468801817.mount: Deactivated successfully. 
Mar 7 01:28:16.869590 containerd[1739]: time="2026-03-07T01:28:16.869541048Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:16.872754 containerd[1739]: time="2026-03-07T01:28:16.872563455Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 01:28:16.877756 containerd[1739]: time="2026-03-07T01:28:16.877576666Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:16.883773 containerd[1739]: time="2026-03-07T01:28:16.883684280Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:16.885371 containerd[1739]: time="2026-03-07T01:28:16.884743202Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.181521273s" Mar 7 01:28:16.885371 containerd[1739]: time="2026-03-07T01:28:16.884785482Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 01:28:16.893273 containerd[1739]: time="2026-03-07T01:28:16.893237261Z" level=info msg="CreateContainer within sandbox \"08ae1ffc73db85a520c40f1b53d836065b68918e2fcd293b517ad83db01f4734\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:28:16.933608 containerd[1739]: time="2026-03-07T01:28:16.933516231Z" level=info msg="CreateContainer within sandbox 
\"08ae1ffc73db85a520c40f1b53d836065b68918e2fcd293b517ad83db01f4734\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"11428577bbf8a7126d86b047d07ea28f962fba3d121b964d6a5b5af72415a39e\"" Mar 7 01:28:16.936119 containerd[1739]: time="2026-03-07T01:28:16.934085553Z" level=info msg="StartContainer for \"11428577bbf8a7126d86b047d07ea28f962fba3d121b964d6a5b5af72415a39e\"" Mar 7 01:28:16.962492 systemd[1]: Started cri-containerd-11428577bbf8a7126d86b047d07ea28f962fba3d121b964d6a5b5af72415a39e.scope - libcontainer container 11428577bbf8a7126d86b047d07ea28f962fba3d121b964d6a5b5af72415a39e. Mar 7 01:28:16.986605 containerd[1739]: time="2026-03-07T01:28:16.986560830Z" level=info msg="StartContainer for \"11428577bbf8a7126d86b047d07ea28f962fba3d121b964d6a5b5af72415a39e\" returns successfully" Mar 7 01:28:17.159777 kubelet[3162]: I0307 01:28:17.159574 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-qmp46" podStartSLOduration=0.976199059 podStartE2EDuration="3.159559776s" podCreationTimestamp="2026-03-07 01:28:14 +0000 UTC" firstStartedPulling="2026-03-07 01:28:14.702380088 +0000 UTC m=+6.717540674" lastFinishedPulling="2026-03-07 01:28:16.885740805 +0000 UTC m=+8.900901391" observedRunningTime="2026-03-07 01:28:17.158752054 +0000 UTC m=+9.173912640" watchObservedRunningTime="2026-03-07 01:28:17.159559776 +0000 UTC m=+9.174720362" Mar 7 01:28:22.777404 sudo[2227]: pam_unix(sudo:session): session closed for user root Mar 7 01:28:22.857076 sshd[2224]: pam_unix(sshd:session): session closed for user core Mar 7 01:28:22.862966 systemd[1]: sshd@6-10.200.20.41:22-10.200.16.10:59632.service: Deactivated successfully. Mar 7 01:28:22.867290 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:28:22.867633 systemd[1]: session-9.scope: Consumed 6.607s CPU time, 152.3M memory peak, 0B memory swap peak. Mar 7 01:28:22.869199 systemd-logind[1711]: Session 9 logged out. 
Waiting for processes to exit. Mar 7 01:28:22.870281 systemd-logind[1711]: Removed session 9. Mar 7 01:28:30.561590 systemd[1]: Created slice kubepods-besteffort-pod36006e4c_996e_4823_9ddf_999927d1a64b.slice - libcontainer container kubepods-besteffort-pod36006e4c_996e_4823_9ddf_999927d1a64b.slice. Mar 7 01:28:30.582542 kubelet[3162]: I0307 01:28:30.582500 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxdg\" (UniqueName: \"kubernetes.io/projected/36006e4c-996e-4823-9ddf-999927d1a64b-kube-api-access-zdxdg\") pod \"calico-typha-679c6bc859-98ghq\" (UID: \"36006e4c-996e-4823-9ddf-999927d1a64b\") " pod="calico-system/calico-typha-679c6bc859-98ghq" Mar 7 01:28:30.582542 kubelet[3162]: I0307 01:28:30.582549 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36006e4c-996e-4823-9ddf-999927d1a64b-tigera-ca-bundle\") pod \"calico-typha-679c6bc859-98ghq\" (UID: \"36006e4c-996e-4823-9ddf-999927d1a64b\") " pod="calico-system/calico-typha-679c6bc859-98ghq" Mar 7 01:28:30.583074 kubelet[3162]: I0307 01:28:30.582567 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/36006e4c-996e-4823-9ddf-999927d1a64b-typha-certs\") pod \"calico-typha-679c6bc859-98ghq\" (UID: \"36006e4c-996e-4823-9ddf-999927d1a64b\") " pod="calico-system/calico-typha-679c6bc859-98ghq" Mar 7 01:28:30.722432 systemd[1]: Created slice kubepods-besteffort-podd12bae09_d894_4bf9_955e_ea109497d6eb.slice - libcontainer container kubepods-besteffort-podd12bae09_d894_4bf9_955e_ea109497d6eb.slice. 
Mar 7 01:28:30.784372 kubelet[3162]: I0307 01:28:30.784301 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-cni-net-dir\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.784372 kubelet[3162]: I0307 01:28:30.784347 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-xtables-lock\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.785543 kubelet[3162]: I0307 01:28:30.785507 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-bpffs\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786490 kubelet[3162]: I0307 01:28:30.785707 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d12bae09-d894-4bf9-955e-ea109497d6eb-node-certs\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786490 kubelet[3162]: I0307 01:28:30.785747 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-var-run-calico\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786490 kubelet[3162]: I0307 01:28:30.785770 3162 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-flexvol-driver-host\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786490 kubelet[3162]: I0307 01:28:30.785793 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-var-lib-calico\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786490 kubelet[3162]: I0307 01:28:30.785809 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-nodeproc\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786671 kubelet[3162]: I0307 01:28:30.785837 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d12bae09-d894-4bf9-955e-ea109497d6eb-tigera-ca-bundle\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786671 kubelet[3162]: I0307 01:28:30.785874 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-cni-bin-dir\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786671 kubelet[3162]: I0307 01:28:30.785898 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-cni-log-dir\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786671 kubelet[3162]: I0307 01:28:30.785917 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-lib-modules\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786671 kubelet[3162]: I0307 01:28:30.785937 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-policysync\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786779 kubelet[3162]: I0307 01:28:30.785951 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d12bae09-d894-4bf9-955e-ea109497d6eb-sys-fs\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.786779 kubelet[3162]: I0307 01:28:30.785998 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs4sg\" (UniqueName: \"kubernetes.io/projected/d12bae09-d894-4bf9-955e-ea109497d6eb-kube-api-access-cs4sg\") pod \"calico-node-rp27x\" (UID: \"d12bae09-d894-4bf9-955e-ea109497d6eb\") " pod="calico-system/calico-node-rp27x" Mar 7 01:28:30.821858 kubelet[3162]: E0307 01:28:30.820978 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf" Mar 7 01:28:30.866064 containerd[1739]: time="2026-03-07T01:28:30.865810873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-679c6bc859-98ghq,Uid:36006e4c-996e-4823-9ddf-999927d1a64b,Namespace:calico-system,Attempt:0,}" Mar 7 01:28:30.887222 kubelet[3162]: I0307 01:28:30.887156 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf-registration-dir\") pod \"csi-node-driver-66vcb\" (UID: \"d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf\") " pod="calico-system/csi-node-driver-66vcb" Mar 7 01:28:30.888306 kubelet[3162]: I0307 01:28:30.887240 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf-kubelet-dir\") pod \"csi-node-driver-66vcb\" (UID: \"d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf\") " pod="calico-system/csi-node-driver-66vcb" Mar 7 01:28:30.888306 kubelet[3162]: I0307 01:28:30.887258 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf-varrun\") pod \"csi-node-driver-66vcb\" (UID: \"d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf\") " pod="calico-system/csi-node-driver-66vcb" Mar 7 01:28:30.888306 kubelet[3162]: I0307 01:28:30.887273 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf-socket-dir\") pod \"csi-node-driver-66vcb\" (UID: \"d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf\") " pod="calico-system/csi-node-driver-66vcb" Mar 7 01:28:30.889202 kubelet[3162]: I0307 01:28:30.889174 3162 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtfks\" (UniqueName: \"kubernetes.io/projected/d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf-kube-api-access-rtfks\") pod \"csi-node-driver-66vcb\" (UID: \"d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf\") " pod="calico-system/csi-node-driver-66vcb" Mar 7 01:28:30.899550 kubelet[3162]: E0307 01:28:30.899524 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.899550 kubelet[3162]: W0307 01:28:30.899549 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.899701 kubelet[3162]: E0307 01:28:30.899568 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.929494 kubelet[3162]: E0307 01:28:30.929087 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.929494 kubelet[3162]: W0307 01:28:30.929387 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.929494 kubelet[3162]: E0307 01:28:30.929414 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.933010 containerd[1739]: time="2026-03-07T01:28:30.932765460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:30.933010 containerd[1739]: time="2026-03-07T01:28:30.932879221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:30.933298 containerd[1739]: time="2026-03-07T01:28:30.932919981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:30.933356 containerd[1739]: time="2026-03-07T01:28:30.933140181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:30.951160 systemd[1]: Started cri-containerd-b6cc432c1a79bbda4e807da4b7ebb2fb338b414d0973d88d58b0feda4557fd36.scope - libcontainer container b6cc432c1a79bbda4e807da4b7ebb2fb338b414d0973d88d58b0feda4557fd36. Mar 7 01:28:30.979030 containerd[1739]: time="2026-03-07T01:28:30.978695882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-679c6bc859-98ghq,Uid:36006e4c-996e-4823-9ddf-999927d1a64b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6cc432c1a79bbda4e807da4b7ebb2fb338b414d0973d88d58b0feda4557fd36\"" Mar 7 01:28:30.980685 containerd[1739]: time="2026-03-07T01:28:30.980634526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:28:30.989758 kubelet[3162]: E0307 01:28:30.989729 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.989758 kubelet[3162]: W0307 01:28:30.989753 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.989883 kubelet[3162]: E0307 01:28:30.989772 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.990012 kubelet[3162]: E0307 01:28:30.989984 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.990012 kubelet[3162]: W0307 01:28:30.990011 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.990077 kubelet[3162]: E0307 01:28:30.990023 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.990242 kubelet[3162]: E0307 01:28:30.990230 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.990242 kubelet[3162]: W0307 01:28:30.990241 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.990317 kubelet[3162]: E0307 01:28:30.990250 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.990469 kubelet[3162]: E0307 01:28:30.990447 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.990469 kubelet[3162]: W0307 01:28:30.990465 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.990523 kubelet[3162]: E0307 01:28:30.990476 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.990687 kubelet[3162]: E0307 01:28:30.990675 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.990733 kubelet[3162]: W0307 01:28:30.990688 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.990733 kubelet[3162]: E0307 01:28:30.990697 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.990911 kubelet[3162]: E0307 01:28:30.990899 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.990911 kubelet[3162]: W0307 01:28:30.990909 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.990978 kubelet[3162]: E0307 01:28:30.990922 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.991119 kubelet[3162]: E0307 01:28:30.991106 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.991119 kubelet[3162]: W0307 01:28:30.991117 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.991182 kubelet[3162]: E0307 01:28:30.991125 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.991428 kubelet[3162]: E0307 01:28:30.991412 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.991428 kubelet[3162]: W0307 01:28:30.991429 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.991497 kubelet[3162]: E0307 01:28:30.991439 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.991625 kubelet[3162]: E0307 01:28:30.991613 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.991625 kubelet[3162]: W0307 01:28:30.991624 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.991682 kubelet[3162]: E0307 01:28:30.991633 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.991819 kubelet[3162]: E0307 01:28:30.991808 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.991819 kubelet[3162]: W0307 01:28:30.991820 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.991884 kubelet[3162]: E0307 01:28:30.991829 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.992060 kubelet[3162]: E0307 01:28:30.992047 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.992060 kubelet[3162]: W0307 01:28:30.992058 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.992179 kubelet[3162]: E0307 01:28:30.992067 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.992252 kubelet[3162]: E0307 01:28:30.992236 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.992252 kubelet[3162]: W0307 01:28:30.992247 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.992307 kubelet[3162]: E0307 01:28:30.992255 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.992464 kubelet[3162]: E0307 01:28:30.992448 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.992464 kubelet[3162]: W0307 01:28:30.992461 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.992522 kubelet[3162]: E0307 01:28:30.992470 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.992620 kubelet[3162]: E0307 01:28:30.992607 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.992620 kubelet[3162]: W0307 01:28:30.992617 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.992673 kubelet[3162]: E0307 01:28:30.992625 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.992864 kubelet[3162]: E0307 01:28:30.992852 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.992864 kubelet[3162]: W0307 01:28:30.992862 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.992937 kubelet[3162]: E0307 01:28:30.992871 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.993145 kubelet[3162]: E0307 01:28:30.993108 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.993145 kubelet[3162]: W0307 01:28:30.993119 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.993145 kubelet[3162]: E0307 01:28:30.993129 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.993306 kubelet[3162]: E0307 01:28:30.993289 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.993306 kubelet[3162]: W0307 01:28:30.993301 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.993368 kubelet[3162]: E0307 01:28:30.993312 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.993497 kubelet[3162]: E0307 01:28:30.993485 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.993497 kubelet[3162]: W0307 01:28:30.993496 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.993556 kubelet[3162]: E0307 01:28:30.993504 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.993675 kubelet[3162]: E0307 01:28:30.993664 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.993675 kubelet[3162]: W0307 01:28:30.993674 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.993737 kubelet[3162]: E0307 01:28:30.993682 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.993840 kubelet[3162]: E0307 01:28:30.993829 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.993840 kubelet[3162]: W0307 01:28:30.993839 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.993892 kubelet[3162]: E0307 01:28:30.993847 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.994110 kubelet[3162]: E0307 01:28:30.994096 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.994110 kubelet[3162]: W0307 01:28:30.994108 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.994198 kubelet[3162]: E0307 01:28:30.994118 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.994315 kubelet[3162]: E0307 01:28:30.994294 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.994315 kubelet[3162]: W0307 01:28:30.994306 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.994315 kubelet[3162]: E0307 01:28:30.994314 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.994640 kubelet[3162]: E0307 01:28:30.994625 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.994640 kubelet[3162]: W0307 01:28:30.994637 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.994785 kubelet[3162]: E0307 01:28:30.994647 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:30.994858 kubelet[3162]: E0307 01:28:30.994847 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.994858 kubelet[3162]: W0307 01:28:30.994857 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.994915 kubelet[3162]: E0307 01:28:30.994865 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:30.995063 kubelet[3162]: E0307 01:28:30.995050 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:30.995109 kubelet[3162]: W0307 01:28:30.995064 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:30.995109 kubelet[3162]: E0307 01:28:30.995073 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:31.004766 kubelet[3162]: E0307 01:28:31.004747 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:31.004921 kubelet[3162]: W0307 01:28:31.004821 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:31.004921 kubelet[3162]: E0307 01:28:31.004838 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:31.028429 containerd[1739]: time="2026-03-07T01:28:31.028388831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rp27x,Uid:d12bae09-d894-4bf9-955e-ea109497d6eb,Namespace:calico-system,Attempt:0,}" Mar 7 01:28:31.079749 containerd[1739]: time="2026-03-07T01:28:31.079497544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:31.079749 containerd[1739]: time="2026-03-07T01:28:31.079551904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:31.079749 containerd[1739]: time="2026-03-07T01:28:31.079562824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:31.081103 containerd[1739]: time="2026-03-07T01:28:31.079644384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:31.103193 systemd[1]: Started cri-containerd-abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da.scope - libcontainer container abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da. 
Mar 7 01:28:31.122384 containerd[1739]: time="2026-03-07T01:28:31.122338159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rp27x,Uid:d12bae09-d894-4bf9-955e-ea109497d6eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\"" Mar 7 01:28:32.094572 kubelet[3162]: E0307 01:28:32.093433 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf" Mar 7 01:28:32.191473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238997887.mount: Deactivated successfully. Mar 7 01:28:32.705037 containerd[1739]: time="2026-03-07T01:28:32.704963610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:32.708114 containerd[1739]: time="2026-03-07T01:28:32.708083577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 7 01:28:32.712078 containerd[1739]: time="2026-03-07T01:28:32.712054866Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:32.717172 containerd[1739]: time="2026-03-07T01:28:32.717120517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:32.717592 containerd[1739]: time="2026-03-07T01:28:32.717561638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.736896912s" Mar 7 01:28:32.717640 containerd[1739]: time="2026-03-07T01:28:32.717592318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 01:28:32.720069 containerd[1739]: time="2026-03-07T01:28:32.719888683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:28:32.738654 containerd[1739]: time="2026-03-07T01:28:32.738619404Z" level=info msg="CreateContainer within sandbox \"b6cc432c1a79bbda4e807da4b7ebb2fb338b414d0973d88d58b0feda4557fd36\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:28:32.785140 containerd[1739]: time="2026-03-07T01:28:32.785028227Z" level=info msg="CreateContainer within sandbox \"b6cc432c1a79bbda4e807da4b7ebb2fb338b414d0973d88d58b0feda4557fd36\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"42ee0c04e7fd62e03618cad074f412633c4fa2171f74219e6402a9e333175b17\"" Mar 7 01:28:32.785794 containerd[1739]: time="2026-03-07T01:28:32.785768188Z" level=info msg="StartContainer for \"42ee0c04e7fd62e03618cad074f412633c4fa2171f74219e6402a9e333175b17\"" Mar 7 01:28:32.816173 systemd[1]: Started cri-containerd-42ee0c04e7fd62e03618cad074f412633c4fa2171f74219e6402a9e333175b17.scope - libcontainer container 42ee0c04e7fd62e03618cad074f412633c4fa2171f74219e6402a9e333175b17. 
Mar 7 01:28:32.852302 containerd[1739]: time="2026-03-07T01:28:32.852255615Z" level=info msg="StartContainer for \"42ee0c04e7fd62e03618cad074f412633c4fa2171f74219e6402a9e333175b17\" returns successfully" Mar 7 01:28:33.190595 kubelet[3162]: E0307 01:28:33.190553 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.190595 kubelet[3162]: W0307 01:28:33.190584 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.190954 kubelet[3162]: E0307 01:28:33.190605 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.192235 kubelet[3162]: E0307 01:28:33.192214 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.192288 kubelet[3162]: W0307 01:28:33.192236 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.192288 kubelet[3162]: E0307 01:28:33.192281 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.192469 kubelet[3162]: E0307 01:28:33.192454 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.192469 kubelet[3162]: W0307 01:28:33.192466 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.192531 kubelet[3162]: E0307 01:28:33.192478 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.192653 kubelet[3162]: E0307 01:28:33.192638 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.192653 kubelet[3162]: W0307 01:28:33.192650 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.192709 kubelet[3162]: E0307 01:28:33.192658 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.192842 kubelet[3162]: E0307 01:28:33.192827 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.192842 kubelet[3162]: W0307 01:28:33.192839 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.192901 kubelet[3162]: E0307 01:28:33.192847 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.193031 kubelet[3162]: E0307 01:28:33.193018 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.193031 kubelet[3162]: W0307 01:28:33.193029 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.193094 kubelet[3162]: E0307 01:28:33.193037 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.193210 kubelet[3162]: E0307 01:28:33.193197 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.193210 kubelet[3162]: W0307 01:28:33.193208 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.193269 kubelet[3162]: E0307 01:28:33.193216 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.193387 kubelet[3162]: E0307 01:28:33.193373 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.193387 kubelet[3162]: W0307 01:28:33.193384 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.193443 kubelet[3162]: E0307 01:28:33.193392 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.193561 kubelet[3162]: E0307 01:28:33.193547 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.193561 kubelet[3162]: W0307 01:28:33.193559 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.193622 kubelet[3162]: E0307 01:28:33.193567 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.193743 kubelet[3162]: E0307 01:28:33.193728 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.193743 kubelet[3162]: W0307 01:28:33.193741 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.193804 kubelet[3162]: E0307 01:28:33.193753 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.193921 kubelet[3162]: E0307 01:28:33.193906 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.193921 kubelet[3162]: W0307 01:28:33.193917 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.193977 kubelet[3162]: E0307 01:28:33.193927 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.194165 kubelet[3162]: E0307 01:28:33.194150 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.194165 kubelet[3162]: W0307 01:28:33.194163 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.194233 kubelet[3162]: E0307 01:28:33.194173 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.195042 kubelet[3162]: E0307 01:28:33.194386 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.195042 kubelet[3162]: W0307 01:28:33.194398 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.195042 kubelet[3162]: E0307 01:28:33.194407 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.195042 kubelet[3162]: E0307 01:28:33.194587 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.195042 kubelet[3162]: W0307 01:28:33.194595 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.195042 kubelet[3162]: E0307 01:28:33.194610 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.195042 kubelet[3162]: E0307 01:28:33.194771 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.195042 kubelet[3162]: W0307 01:28:33.194777 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.195042 kubelet[3162]: E0307 01:28:33.194785 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.196428 kubelet[3162]: I0307 01:28:33.196374 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-679c6bc859-98ghq" podStartSLOduration=1.457820179 podStartE2EDuration="3.196362294s" podCreationTimestamp="2026-03-07 01:28:30 +0000 UTC" firstStartedPulling="2026-03-07 01:28:30.980118325 +0000 UTC m=+22.995278911" lastFinishedPulling="2026-03-07 01:28:32.71866044 +0000 UTC m=+24.733821026" observedRunningTime="2026-03-07 01:28:33.196045774 +0000 UTC m=+25.211206360" watchObservedRunningTime="2026-03-07 01:28:33.196362294 +0000 UTC m=+25.211522880" Mar 7 01:28:33.204244 kubelet[3162]: E0307 01:28:33.204215 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.204244 kubelet[3162]: W0307 01:28:33.204238 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.204244 kubelet[3162]: E0307 01:28:33.204257 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.204472 kubelet[3162]: E0307 01:28:33.204453 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.204472 kubelet[3162]: W0307 01:28:33.204461 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.204518 kubelet[3162]: E0307 01:28:33.204481 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.204741 kubelet[3162]: E0307 01:28:33.204725 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.204741 kubelet[3162]: W0307 01:28:33.204738 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.204908 kubelet[3162]: E0307 01:28:33.204749 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.205194 kubelet[3162]: E0307 01:28:33.205084 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.205194 kubelet[3162]: W0307 01:28:33.205101 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.205194 kubelet[3162]: E0307 01:28:33.205113 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.205359 kubelet[3162]: E0307 01:28:33.205348 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.205490 kubelet[3162]: W0307 01:28:33.205393 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.205490 kubelet[3162]: E0307 01:28:33.205405 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.205712 kubelet[3162]: E0307 01:28:33.205673 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.205712 kubelet[3162]: W0307 01:28:33.205684 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.205712 kubelet[3162]: E0307 01:28:33.205696 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.205962 kubelet[3162]: E0307 01:28:33.205946 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.205962 kubelet[3162]: W0307 01:28:33.205960 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.206167 kubelet[3162]: E0307 01:28:33.205972 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.206252 kubelet[3162]: E0307 01:28:33.206239 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.206252 kubelet[3162]: W0307 01:28:33.206250 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.206327 kubelet[3162]: E0307 01:28:33.206260 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.206475 kubelet[3162]: E0307 01:28:33.206461 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.206475 kubelet[3162]: W0307 01:28:33.206473 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.206613 kubelet[3162]: E0307 01:28:33.206483 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.206759 kubelet[3162]: E0307 01:28:33.206745 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.206759 kubelet[3162]: W0307 01:28:33.206756 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.206844 kubelet[3162]: E0307 01:28:33.206768 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.207332 kubelet[3162]: E0307 01:28:33.207317 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.207432 kubelet[3162]: W0307 01:28:33.207409 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.207560 kubelet[3162]: E0307 01:28:33.207484 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.207757 kubelet[3162]: E0307 01:28:33.207744 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.207888 kubelet[3162]: W0307 01:28:33.207817 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.207888 kubelet[3162]: E0307 01:28:33.207833 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.208255 kubelet[3162]: E0307 01:28:33.208151 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.208255 kubelet[3162]: W0307 01:28:33.208166 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.208255 kubelet[3162]: E0307 01:28:33.208177 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.208455 kubelet[3162]: E0307 01:28:33.208415 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.208455 kubelet[3162]: W0307 01:28:33.208426 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.208455 kubelet[3162]: E0307 01:28:33.208437 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.208700 kubelet[3162]: E0307 01:28:33.208685 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.208700 kubelet[3162]: W0307 01:28:33.208699 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.208858 kubelet[3162]: E0307 01:28:33.208710 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.208936 kubelet[3162]: E0307 01:28:33.208922 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.208936 kubelet[3162]: W0307 01:28:33.208933 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.209013 kubelet[3162]: E0307 01:28:33.208941 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.209166 kubelet[3162]: E0307 01:28:33.209152 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.209166 kubelet[3162]: W0307 01:28:33.209163 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.209234 kubelet[3162]: E0307 01:28:33.209172 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:28:33.209493 kubelet[3162]: E0307 01:28:33.209478 3162 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:28:33.209493 kubelet[3162]: W0307 01:28:33.209491 3162 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:28:33.209556 kubelet[3162]: E0307 01:28:33.209501 3162 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:28:33.933708 containerd[1739]: time="2026-03-07T01:28:33.933377600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:33.936562 containerd[1739]: time="2026-03-07T01:28:33.936429367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 7 01:28:33.940392 containerd[1739]: time="2026-03-07T01:28:33.940136815Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:33.945660 containerd[1739]: time="2026-03-07T01:28:33.945632587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:33.946810 containerd[1739]: time="2026-03-07T01:28:33.946630589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.226705106s" Mar 7 01:28:33.947100 containerd[1739]: time="2026-03-07T01:28:33.946902190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 7 01:28:33.955203 containerd[1739]: time="2026-03-07T01:28:33.955152048Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:28:33.998014 containerd[1739]: time="2026-03-07T01:28:33.997964143Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0\"" Mar 7 01:28:34.000040 containerd[1739]: time="2026-03-07T01:28:33.998831505Z" level=info msg="StartContainer for \"74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0\"" Mar 7 01:28:34.027144 systemd[1]: Started cri-containerd-74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0.scope - libcontainer container 74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0. Mar 7 01:28:34.057545 containerd[1739]: time="2026-03-07T01:28:34.057494314Z" level=info msg="StartContainer for \"74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0\" returns successfully" Mar 7 01:28:34.064461 systemd[1]: cri-containerd-74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0.scope: Deactivated successfully. 
Mar 7 01:28:34.084295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0-rootfs.mount: Deactivated successfully. Mar 7 01:28:34.093828 kubelet[3162]: E0307 01:28:34.092547 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf" Mar 7 01:28:34.179018 kubelet[3162]: I0307 01:28:34.178126 3162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:28:35.163735 containerd[1739]: time="2026-03-07T01:28:35.163618954Z" level=info msg="shim disconnected" id=74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0 namespace=k8s.io Mar 7 01:28:35.163735 containerd[1739]: time="2026-03-07T01:28:35.163670914Z" level=warning msg="cleaning up after shim disconnected" id=74e31c0110038a58be2630cd5d36ba3ad47e7708356194833efa12bf43334ab0 namespace=k8s.io Mar 7 01:28:35.163735 containerd[1739]: time="2026-03-07T01:28:35.163679634Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:28:35.173674 containerd[1739]: time="2026-03-07T01:28:35.173622536Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:28:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:28:35.184088 containerd[1739]: time="2026-03-07T01:28:35.184018919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:28:36.095017 kubelet[3162]: E0307 01:28:36.094743 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf" Mar 7 01:28:38.093675 kubelet[3162]: E0307 01:28:38.093627 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf" Mar 7 01:28:39.411718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268012562.mount: Deactivated successfully. Mar 7 01:28:39.456028 containerd[1739]: time="2026-03-07T01:28:39.455965621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:39.460018 containerd[1739]: time="2026-03-07T01:28:39.459973591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 7 01:28:39.464255 containerd[1739]: time="2026-03-07T01:28:39.463948400Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:39.468774 containerd[1739]: time="2026-03-07T01:28:39.468721412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:39.469906 containerd[1739]: time="2026-03-07T01:28:39.469338174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.285280654s" Mar 7 
01:28:39.469906 containerd[1739]: time="2026-03-07T01:28:39.469371854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 7 01:28:39.478619 containerd[1739]: time="2026-03-07T01:28:39.478493076Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:28:39.520942 containerd[1739]: time="2026-03-07T01:28:39.520872659Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6\"" Mar 7 01:28:39.524956 containerd[1739]: time="2026-03-07T01:28:39.523587706Z" level=info msg="StartContainer for \"f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6\"" Mar 7 01:28:39.559164 systemd[1]: Started cri-containerd-f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6.scope - libcontainer container f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6. Mar 7 01:28:39.591644 containerd[1739]: time="2026-03-07T01:28:39.591499632Z" level=info msg="StartContainer for \"f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6\" returns successfully" Mar 7 01:28:39.628068 systemd[1]: cri-containerd-f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6.scope: Deactivated successfully. 
Mar 7 01:28:40.426340 kubelet[3162]: E0307 01:28:40.094232 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf"
Mar 7 01:28:40.411769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6-rootfs.mount: Deactivated successfully.
Mar 7 01:28:40.978481 kubelet[3162]: I0307 01:28:40.978025 3162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:28:41.239605 containerd[1739]: time="2026-03-07T01:28:41.239297174Z" level=info msg="shim disconnected" id=f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6 namespace=k8s.io
Mar 7 01:28:41.239605 containerd[1739]: time="2026-03-07T01:28:41.239348054Z" level=warning msg="cleaning up after shim disconnected" id=f965dd8e61aec824e76726b0c72537439b6cb50bca80d8ea0cc8736f05fdc3d6 namespace=k8s.io
Mar 7 01:28:41.239605 containerd[1739]: time="2026-03-07T01:28:41.239356094Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:28:42.093754 kubelet[3162]: E0307 01:28:42.092635 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf"
Mar 7 01:28:42.199624 containerd[1739]: time="2026-03-07T01:28:42.199369557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:28:44.094846 kubelet[3162]: E0307 01:28:44.094407 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf"
Mar 7 01:28:44.476077 containerd[1739]: time="2026-03-07T01:28:44.475812194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:28:44.479069 containerd[1739]: time="2026-03-07T01:28:44.479031802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Mar 7 01:28:44.482400 containerd[1739]: time="2026-03-07T01:28:44.482357730Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:28:44.487284 containerd[1739]: time="2026-03-07T01:28:44.487207542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:28:44.488135 containerd[1739]: time="2026-03-07T01:28:44.488105664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.288694747s"
Mar 7 01:28:44.488199 containerd[1739]: time="2026-03-07T01:28:44.488136384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Mar 7 01:28:44.497144 containerd[1739]: time="2026-03-07T01:28:44.497106006Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:28:44.523976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168916012.mount: Deactivated successfully.
Mar 7 01:28:44.539999 containerd[1739]: time="2026-03-07T01:28:44.539947270Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918\""
Mar 7 01:28:44.542250 containerd[1739]: time="2026-03-07T01:28:44.540856553Z" level=info msg="StartContainer for \"006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918\""
Mar 7 01:28:44.575147 systemd[1]: Started cri-containerd-006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918.scope - libcontainer container 006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918.
Mar 7 01:28:44.608075 containerd[1739]: time="2026-03-07T01:28:44.607867156Z" level=info msg="StartContainer for \"006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918\" returns successfully"
Mar 7 01:28:46.094016 kubelet[3162]: E0307 01:28:46.093273 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66vcb" podUID="d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf"
Mar 7 01:28:46.666653 systemd[1]: cri-containerd-006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918.scope: Deactivated successfully.
Mar 7 01:28:46.689201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918-rootfs.mount: Deactivated successfully.
Mar 7 01:28:46.693583 kubelet[3162]: I0307 01:28:46.692286 3162 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 7 01:28:46.701722 containerd[1739]: time="2026-03-07T01:28:46.701661154Z" level=info msg="shim disconnected" id=006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918 namespace=k8s.io
Mar 7 01:28:46.701722 containerd[1739]: time="2026-03-07T01:28:46.701714154Z" level=warning msg="cleaning up after shim disconnected" id=006f9c362af40c12eeb0cc1253e558a73d97d0f5562a3dcecd3cf3d679beb918 namespace=k8s.io
Mar 7 01:28:46.701722 containerd[1739]: time="2026-03-07T01:28:46.701722755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:28:46.752824 systemd[1]: Created slice kubepods-burstable-podf1d81093_838f_42f2_bd2e_44a2be2ca4cf.slice - libcontainer container kubepods-burstable-podf1d81093_838f_42f2_bd2e_44a2be2ca4cf.slice.
Mar 7 01:28:46.773602 systemd[1]: Created slice kubepods-besteffort-podb138cb7e_09ad_4994_8633_ecd968afa99c.slice - libcontainer container kubepods-besteffort-podb138cb7e_09ad_4994_8633_ecd968afa99c.slice.
Mar 7 01:28:46.782766 systemd[1]: Created slice kubepods-burstable-pod071a7194_ad19_4e52_9195_4caf3f158140.slice - libcontainer container kubepods-burstable-pod071a7194_ad19_4e52_9195_4caf3f158140.slice.
Mar 7 01:28:46.795712 systemd[1]: Created slice kubepods-besteffort-pod90c77c2c_38f6_4e7b_a0d0_324bcdac7ea5.slice - libcontainer container kubepods-besteffort-pod90c77c2c_38f6_4e7b_a0d0_324bcdac7ea5.slice.
Mar 7 01:28:46.805241 systemd[1]: Created slice kubepods-besteffort-pode2a5fc71_1b77_4f5e_a88c_b160d32eae5f.slice - libcontainer container kubepods-besteffort-pode2a5fc71_1b77_4f5e_a88c_b160d32eae5f.slice.
Mar 7 01:28:46.814431 systemd[1]: Created slice kubepods-besteffort-pod69d1a88c_b6a1_4699_84d0_d686e59af986.slice - libcontainer container kubepods-besteffort-pod69d1a88c_b6a1_4699_84d0_d686e59af986.slice.
Mar 7 01:28:46.820082 systemd[1]: Created slice kubepods-besteffort-pod73e5dc10_ec6b_44b7_a83a_ebda472da3b6.slice - libcontainer container kubepods-besteffort-pod73e5dc10_ec6b_44b7_a83a_ebda472da3b6.slice.
Mar 7 01:28:46.822676 kubelet[3162]: I0307 01:28:46.822638 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqbx\" (UniqueName: \"kubernetes.io/projected/73e5dc10-ec6b-44b7-a83a-ebda472da3b6-kube-api-access-gqqbx\") pod \"calico-apiserver-765cff7995-mchvs\" (UID: \"73e5dc10-ec6b-44b7-a83a-ebda472da3b6\") " pod="calico-system/calico-apiserver-765cff7995-mchvs"
Mar 7 01:28:46.822676 kubelet[3162]: I0307 01:28:46.822677 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbw8\" (UniqueName: \"kubernetes.io/projected/071a7194-ad19-4e52-9195-4caf3f158140-kube-api-access-qxbw8\") pod \"coredns-674b8bbfcf-cmk8x\" (UID: \"071a7194-ad19-4e52-9195-4caf3f158140\") " pod="kube-system/coredns-674b8bbfcf-cmk8x"
Mar 7 01:28:46.822822 kubelet[3162]: I0307 01:28:46.822696 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-backend-key-pair\") pod \"whisker-5b8f788665-slvv4\" (UID: \"69d1a88c-b6a1-4699-84d0-d686e59af986\") " pod="calico-system/whisker-5b8f788665-slvv4"
Mar 7 01:28:46.822822 kubelet[3162]: I0307 01:28:46.822715 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73e5dc10-ec6b-44b7-a83a-ebda472da3b6-calico-apiserver-certs\") pod \"calico-apiserver-765cff7995-mchvs\" (UID: \"73e5dc10-ec6b-44b7-a83a-ebda472da3b6\") " pod="calico-system/calico-apiserver-765cff7995-mchvs"
Mar 7 01:28:46.822822 kubelet[3162]: I0307 01:28:46.822733 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b138cb7e-09ad-4994-8633-ecd968afa99c-tigera-ca-bundle\") pod \"calico-kube-controllers-7b4f64765f-kwd8x\" (UID: \"b138cb7e-09ad-4994-8633-ecd968afa99c\") " pod="calico-system/calico-kube-controllers-7b4f64765f-kwd8x"
Mar 7 01:28:46.822822 kubelet[3162]: I0307 01:28:46.822753 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkkk\" (UniqueName: \"kubernetes.io/projected/90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5-kube-api-access-6wkkk\") pod \"goldmane-5b85766d88-xfb4w\" (UID: \"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5\") " pod="calico-system/goldmane-5b85766d88-xfb4w"
Mar 7 01:28:46.822822 kubelet[3162]: I0307 01:28:46.822768 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfc4\" (UniqueName: \"kubernetes.io/projected/f1d81093-838f-42f2-bd2e-44a2be2ca4cf-kube-api-access-5rfc4\") pod \"coredns-674b8bbfcf-2vtc8\" (UID: \"f1d81093-838f-42f2-bd2e-44a2be2ca4cf\") " pod="kube-system/coredns-674b8bbfcf-2vtc8"
Mar 7 01:28:46.822945 kubelet[3162]: I0307 01:28:46.822787 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnfz2\" (UniqueName: \"kubernetes.io/projected/e2a5fc71-1b77-4f5e-a88c-b160d32eae5f-kube-api-access-bnfz2\") pod \"calico-apiserver-765cff7995-szzxj\" (UID: \"e2a5fc71-1b77-4f5e-a88c-b160d32eae5f\") " pod="calico-system/calico-apiserver-765cff7995-szzxj"
Mar 7 01:28:46.822945 kubelet[3162]: I0307 01:28:46.822802 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-ca-bundle\") pod \"whisker-5b8f788665-slvv4\" (UID: \"69d1a88c-b6a1-4699-84d0-d686e59af986\") " pod="calico-system/whisker-5b8f788665-slvv4"
Mar 7 01:28:46.822945 kubelet[3162]: I0307 01:28:46.822820 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrx2\" (UniqueName: \"kubernetes.io/projected/69d1a88c-b6a1-4699-84d0-d686e59af986-kube-api-access-qkrx2\") pod \"whisker-5b8f788665-slvv4\" (UID: \"69d1a88c-b6a1-4699-84d0-d686e59af986\") " pod="calico-system/whisker-5b8f788665-slvv4"
Mar 7 01:28:46.822945 kubelet[3162]: I0307 01:28:46.822839 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5-config\") pod \"goldmane-5b85766d88-xfb4w\" (UID: \"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5\") " pod="calico-system/goldmane-5b85766d88-xfb4w"
Mar 7 01:28:46.822945 kubelet[3162]: I0307 01:28:46.822856 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5-goldmane-key-pair\") pod \"goldmane-5b85766d88-xfb4w\" (UID: \"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5\") " pod="calico-system/goldmane-5b85766d88-xfb4w"
Mar 7 01:28:46.823381 kubelet[3162]: I0307 01:28:46.822871 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2a5fc71-1b77-4f5e-a88c-b160d32eae5f-calico-apiserver-certs\") pod \"calico-apiserver-765cff7995-szzxj\" (UID: \"e2a5fc71-1b77-4f5e-a88c-b160d32eae5f\") " pod="calico-system/calico-apiserver-765cff7995-szzxj"
Mar 7 01:28:46.823381 kubelet[3162]: I0307 01:28:46.822892 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjsm\" (UniqueName: \"kubernetes.io/projected/b138cb7e-09ad-4994-8633-ecd968afa99c-kube-api-access-wmjsm\") pod \"calico-kube-controllers-7b4f64765f-kwd8x\" (UID: \"b138cb7e-09ad-4994-8633-ecd968afa99c\") " pod="calico-system/calico-kube-controllers-7b4f64765f-kwd8x"
Mar 7 01:28:46.823381 kubelet[3162]: I0307 01:28:46.822907 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/071a7194-ad19-4e52-9195-4caf3f158140-config-volume\") pod \"coredns-674b8bbfcf-cmk8x\" (UID: \"071a7194-ad19-4e52-9195-4caf3f158140\") " pod="kube-system/coredns-674b8bbfcf-cmk8x"
Mar 7 01:28:46.823381 kubelet[3162]: I0307 01:28:46.822922 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-nginx-config\") pod \"whisker-5b8f788665-slvv4\" (UID: \"69d1a88c-b6a1-4699-84d0-d686e59af986\") " pod="calico-system/whisker-5b8f788665-slvv4"
Mar 7 01:28:46.823381 kubelet[3162]: I0307 01:28:46.822940 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-xfb4w\" (UID: \"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5\") " pod="calico-system/goldmane-5b85766d88-xfb4w"
Mar 7 01:28:46.824164 kubelet[3162]: I0307 01:28:46.824052 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d81093-838f-42f2-bd2e-44a2be2ca4cf-config-volume\") pod \"coredns-674b8bbfcf-2vtc8\" (UID: \"f1d81093-838f-42f2-bd2e-44a2be2ca4cf\") " pod="kube-system/coredns-674b8bbfcf-2vtc8"
Mar 7 01:28:47.067055 containerd[1739]: time="2026-03-07T01:28:47.066622929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vtc8,Uid:f1d81093-838f-42f2-bd2e-44a2be2ca4cf,Namespace:kube-system,Attempt:0,}"
Mar 7 01:28:47.088712 containerd[1739]: time="2026-03-07T01:28:47.088476058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4f64765f-kwd8x,Uid:b138cb7e-09ad-4994-8633-ecd968afa99c,Namespace:calico-system,Attempt:0,}"
Mar 7 01:28:47.095218 containerd[1739]: time="2026-03-07T01:28:47.095183993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmk8x,Uid:071a7194-ad19-4e52-9195-4caf3f158140,Namespace:kube-system,Attempt:0,}"
Mar 7 01:28:47.102424 containerd[1739]: time="2026-03-07T01:28:47.102109048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xfb4w,Uid:90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5,Namespace:calico-system,Attempt:0,}"
Mar 7 01:28:47.110650 containerd[1739]: time="2026-03-07T01:28:47.110373187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-szzxj,Uid:e2a5fc71-1b77-4f5e-a88c-b160d32eae5f,Namespace:calico-system,Attempt:0,}"
Mar 7 01:28:47.118566 containerd[1739]: time="2026-03-07T01:28:47.118531125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8f788665-slvv4,Uid:69d1a88c-b6a1-4699-84d0-d686e59af986,Namespace:calico-system,Attempt:0,}"
Mar 7 01:28:47.126860 containerd[1739]: time="2026-03-07T01:28:47.126817623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-mchvs,Uid:73e5dc10-ec6b-44b7-a83a-ebda472da3b6,Namespace:calico-system,Attempt:0,}"
Mar 7 01:28:47.231030 containerd[1739]: time="2026-03-07T01:28:47.230826415Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:28:47.287934 containerd[1739]: time="2026-03-07T01:28:47.287812383Z" level=error msg="Failed to destroy network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.289003 containerd[1739]: time="2026-03-07T01:28:47.288845465Z" level=error msg="encountered an error cleaning up failed sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.289247 containerd[1739]: time="2026-03-07T01:28:47.289191226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vtc8,Uid:f1d81093-838f-42f2-bd2e-44a2be2ca4cf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.290242 kubelet[3162]: E0307 01:28:47.290192 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.290685 kubelet[3162]: E0307 01:28:47.290263 3162 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2vtc8"
Mar 7 01:28:47.290685 kubelet[3162]: E0307 01:28:47.290282 3162 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2vtc8"
Mar 7 01:28:47.290685 kubelet[3162]: E0307 01:28:47.290327 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2vtc8_kube-system(f1d81093-838f-42f2-bd2e-44a2be2ca4cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2vtc8_kube-system(f1d81093-838f-42f2-bd2e-44a2be2ca4cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2vtc8" podUID="f1d81093-838f-42f2-bd2e-44a2be2ca4cf"
Mar 7 01:28:47.302858 containerd[1739]: time="2026-03-07T01:28:47.302078414Z" level=error msg="Failed to destroy network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.303658 containerd[1739]: time="2026-03-07T01:28:47.303623818Z" level=error msg="encountered an error cleaning up failed sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.304050 containerd[1739]: time="2026-03-07T01:28:47.303798378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4f64765f-kwd8x,Uid:b138cb7e-09ad-4994-8633-ecd968afa99c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.304458 kubelet[3162]: E0307 01:28:47.304410 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.304533 kubelet[3162]: E0307 01:28:47.304494 3162 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4f64765f-kwd8x"
Mar 7 01:28:47.304558 kubelet[3162]: E0307 01:28:47.304533 3162 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4f64765f-kwd8x"
Mar 7 01:28:47.304614 kubelet[3162]: E0307 01:28:47.304585 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b4f64765f-kwd8x_calico-system(b138cb7e-09ad-4994-8633-ecd968afa99c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b4f64765f-kwd8x_calico-system(b138cb7e-09ad-4994-8633-ecd968afa99c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4f64765f-kwd8x" podUID="b138cb7e-09ad-4994-8633-ecd968afa99c"
Mar 7 01:28:47.347755 containerd[1739]: time="2026-03-07T01:28:47.347708476Z" level=info msg="CreateContainer within sandbox \"abe11b4f3d9c9a7f91df33584301d117f028b570d90a04333e1cd7f0313fb8da\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d56996284a41b6ac3b47895f00dd65f79a61f17f21cb8abb4e01fd7b57f18e18\""
Mar 7 01:28:47.353386 containerd[1739]: time="2026-03-07T01:28:47.353329689Z" level=info msg="StartContainer for \"d56996284a41b6ac3b47895f00dd65f79a61f17f21cb8abb4e01fd7b57f18e18\""
Mar 7 01:28:47.423208 systemd[1]: Started cri-containerd-d56996284a41b6ac3b47895f00dd65f79a61f17f21cb8abb4e01fd7b57f18e18.scope - libcontainer container d56996284a41b6ac3b47895f00dd65f79a61f17f21cb8abb4e01fd7b57f18e18.
Mar 7 01:28:47.437163 containerd[1739]: time="2026-03-07T01:28:47.437095956Z" level=error msg="Failed to destroy network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.437908 containerd[1739]: time="2026-03-07T01:28:47.437780517Z" level=error msg="encountered an error cleaning up failed sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.437908 containerd[1739]: time="2026-03-07T01:28:47.437839437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmk8x,Uid:071a7194-ad19-4e52-9195-4caf3f158140,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.439875 kubelet[3162]: E0307 01:28:47.439734 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.439875 kubelet[3162]: E0307 01:28:47.439814 3162 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cmk8x"
Mar 7 01:28:47.439875 kubelet[3162]: E0307 01:28:47.439857 3162 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cmk8x"
Mar 7 01:28:47.440052 kubelet[3162]: E0307 01:28:47.439930 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cmk8x_kube-system(071a7194-ad19-4e52-9195-4caf3f158140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cmk8x_kube-system(071a7194-ad19-4e52-9195-4caf3f158140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cmk8x" podUID="071a7194-ad19-4e52-9195-4caf3f158140"
Mar 7 01:28:47.466838 containerd[1739]: time="2026-03-07T01:28:47.466645422Z" level=error msg="Failed to destroy network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.467461 containerd[1739]: time="2026-03-07T01:28:47.467260743Z" level=error msg="encountered an error cleaning up failed sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.467461 containerd[1739]: time="2026-03-07T01:28:47.467317543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8f788665-slvv4,Uid:69d1a88c-b6a1-4699-84d0-d686e59af986,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.467627 kubelet[3162]: E0307 01:28:47.467539 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.467627 kubelet[3162]: E0307 01:28:47.467590 3162 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8f788665-slvv4"
Mar 7 01:28:47.467627 kubelet[3162]: E0307 01:28:47.467609 3162 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8f788665-slvv4"
Mar 7 01:28:47.467737 kubelet[3162]: E0307 01:28:47.467656 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b8f788665-slvv4_calico-system(69d1a88c-b6a1-4699-84d0-d686e59af986)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b8f788665-slvv4_calico-system(69d1a88c-b6a1-4699-84d0-d686e59af986)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8f788665-slvv4" podUID="69d1a88c-b6a1-4699-84d0-d686e59af986"
Mar 7 01:28:47.476962 containerd[1739]: time="2026-03-07T01:28:47.476884285Z" level=error msg="Failed to destroy network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.477567 containerd[1739]: time="2026-03-07T01:28:47.477173645Z" level=error msg="Failed to destroy network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.477567 containerd[1739]: time="2026-03-07T01:28:47.477433606Z" level=error msg="encountered an error cleaning up failed sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.477567 containerd[1739]: time="2026-03-07T01:28:47.477479286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xfb4w,Uid:90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.477715 kubelet[3162]: E0307 01:28:47.477652 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.477715 kubelet[3162]: E0307 01:28:47.477705 3162 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xfb4w"
Mar 7 01:28:47.477783 kubelet[3162]: E0307 01:28:47.477726 3162 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xfb4w"
Mar 7 01:28:47.477807 kubelet[3162]: E0307 01:28:47.477770 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-xfb4w_calico-system(90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-xfb4w_calico-system(90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-xfb4w" podUID="90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5"
Mar 7 01:28:47.478809 containerd[1739]: time="2026-03-07T01:28:47.478774049Z" level=error msg="encountered an error cleaning up failed sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.479102 containerd[1739]: time="2026-03-07T01:28:47.478916929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-szzxj,Uid:e2a5fc71-1b77-4f5e-a88c-b160d32eae5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.479877 kubelet[3162]: E0307 01:28:47.479272 3162 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:28:47.479877 kubelet[3162]: E0307 01:28:47.479316 3162 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-765cff7995-szzxj"
Mar 7 01:28:47.479877 kubelet[3162]: E0307 01:28:47.479333 3162 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-765cff7995-szzxj"
Mar 7 01:28:47.480161 kubelet[3162]: E0307 01:28:47.479379 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-765cff7995-szzxj_calico-system(e2a5fc71-1b77-4f5e-a88c-b160d32eae5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod
\\\"calico-apiserver-765cff7995-szzxj_calico-system(e2a5fc71-1b77-4f5e-a88c-b160d32eae5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-765cff7995-szzxj" podUID="e2a5fc71-1b77-4f5e-a88c-b160d32eae5f" Mar 7 01:28:47.488006 containerd[1739]: time="2026-03-07T01:28:47.487943309Z" level=error msg="Failed to destroy network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:28:47.488620 containerd[1739]: time="2026-03-07T01:28:47.488590311Z" level=error msg="encountered an error cleaning up failed sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:28:47.488741 containerd[1739]: time="2026-03-07T01:28:47.488719711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-mchvs,Uid:73e5dc10-ec6b-44b7-a83a-ebda472da3b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:28:47.489046 kubelet[3162]: E0307 01:28:47.489009 3162 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:28:47.489119 kubelet[3162]: E0307 01:28:47.489061 3162 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-765cff7995-mchvs" Mar 7 01:28:47.489119 kubelet[3162]: E0307 01:28:47.489079 3162 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-765cff7995-mchvs" Mar 7 01:28:47.489173 kubelet[3162]: E0307 01:28:47.489124 3162 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-765cff7995-mchvs_calico-system(73e5dc10-ec6b-44b7-a83a-ebda472da3b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-765cff7995-mchvs_calico-system(73e5dc10-ec6b-44b7-a83a-ebda472da3b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-765cff7995-mchvs" podUID="73e5dc10-ec6b-44b7-a83a-ebda472da3b6" Mar 7 01:28:47.502927 containerd[1739]: time="2026-03-07T01:28:47.502879983Z" level=info msg="StartContainer for \"d56996284a41b6ac3b47895f00dd65f79a61f17f21cb8abb4e01fd7b57f18e18\" returns successfully" Mar 7 01:28:48.098887 systemd[1]: Created slice kubepods-besteffort-podd9ccc5b3_674c_44b0_9a28_c6c06c5b61cf.slice - libcontainer container kubepods-besteffort-podd9ccc5b3_674c_44b0_9a28_c6c06c5b61cf.slice. Mar 7 01:28:48.101620 containerd[1739]: time="2026-03-07T01:28:48.101261278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66vcb,Uid:d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf,Namespace:calico-system,Attempt:0,}" Mar 7 01:28:48.214653 kubelet[3162]: I0307 01:28:48.214614 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:28:48.216985 containerd[1739]: time="2026-03-07T01:28:48.216401775Z" level=info msg="StopPodSandbox for \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\"" Mar 7 01:28:48.216985 containerd[1739]: time="2026-03-07T01:28:48.216821296Z" level=info msg="Ensure that sandbox 370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65 in task-service has been cleanup successfully" Mar 7 01:28:48.223545 kubelet[3162]: I0307 01:28:48.223510 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:28:48.227712 containerd[1739]: time="2026-03-07T01:28:48.227677280Z" level=info msg="StopPodSandbox for \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\"" Mar 7 01:28:48.228975 containerd[1739]: time="2026-03-07T01:28:48.228944203Z" level=info msg="Ensure that sandbox 
823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe in task-service has been cleanup successfully" Mar 7 01:28:48.237579 kubelet[3162]: I0307 01:28:48.237288 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:28:48.238055 containerd[1739]: time="2026-03-07T01:28:48.238019143Z" level=info msg="StopPodSandbox for \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\"" Mar 7 01:28:48.240688 kubelet[3162]: I0307 01:28:48.240657 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:28:48.242002 containerd[1739]: time="2026-03-07T01:28:48.241490111Z" level=info msg="Ensure that sandbox 5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49 in task-service has been cleanup successfully" Mar 7 01:28:48.244479 containerd[1739]: time="2026-03-07T01:28:48.243466795Z" level=info msg="StopPodSandbox for \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\"" Mar 7 01:28:48.244479 containerd[1739]: time="2026-03-07T01:28:48.243642876Z" level=info msg="Ensure that sandbox 090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf in task-service has been cleanup successfully" Mar 7 01:28:48.255444 kubelet[3162]: I0307 01:28:48.255405 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:28:48.259607 containerd[1739]: time="2026-03-07T01:28:48.259145590Z" level=info msg="StopPodSandbox for \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\"" Mar 7 01:28:48.261175 containerd[1739]: time="2026-03-07T01:28:48.261124315Z" level=info msg="Ensure that sandbox 7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01 in task-service has been cleanup successfully" Mar 7 
01:28:48.262825 kubelet[3162]: I0307 01:28:48.262738 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:28:48.268031 containerd[1739]: time="2026-03-07T01:28:48.267974930Z" level=info msg="StopPodSandbox for \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\"" Mar 7 01:28:48.269646 containerd[1739]: time="2026-03-07T01:28:48.268147651Z" level=info msg="Ensure that sandbox 832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045 in task-service has been cleanup successfully" Mar 7 01:28:48.279903 kubelet[3162]: I0307 01:28:48.279875 3162 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:28:48.281965 containerd[1739]: time="2026-03-07T01:28:48.281450080Z" level=info msg="StopPodSandbox for \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\"" Mar 7 01:28:48.283878 containerd[1739]: time="2026-03-07T01:28:48.283839486Z" level=info msg="Ensure that sandbox c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7 in task-service has been cleanup successfully" Mar 7 01:28:48.393512 systemd-networkd[1356]: cali5ca627c65b4: Link UP Mar 7 01:28:48.394625 systemd-networkd[1356]: cali5ca627c65b4: Gained carrier Mar 7 01:28:48.422578 kubelet[3162]: I0307 01:28:48.421983 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rp27x" podStartSLOduration=5.056400848 podStartE2EDuration="18.421957914s" podCreationTimestamp="2026-03-07 01:28:30 +0000 UTC" firstStartedPulling="2026-03-07 01:28:31.123616281 +0000 UTC m=+23.138776867" lastFinishedPulling="2026-03-07 01:28:44.489173347 +0000 UTC m=+36.504333933" observedRunningTime="2026-03-07 01:28:48.267167048 +0000 UTC m=+40.282327634" watchObservedRunningTime="2026-03-07 01:28:48.421957914 +0000 UTC m=+40.437118500" 
Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.173 [ERROR][4224] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.188 [INFO][4224] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0 csi-node-driver- calico-system d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf 710 0 2026-03-07 01:28:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 csi-node-driver-66vcb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5ca627c65b4 [] [] }} ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.188 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.214 [INFO][4237] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" HandleID="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" 
Workload="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.252 [INFO][4237] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" HandleID="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Workload="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"csi-node-driver-66vcb", "timestamp":"2026-03-07 01:28:48.214886532 +0000 UTC"}, Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003d71e0)} Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.252 [INFO][4237] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.252 [INFO][4237] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.252 [INFO][4237] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.266 [INFO][4237] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.279 [INFO][4237] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.295 [INFO][4237] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.302 [INFO][4237] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.312 [INFO][4237] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.313 [INFO][4237] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.323 [INFO][4237] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705 Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.340 [INFO][4237] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.356 [INFO][4237] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.34.193/26] block=192.168.34.192/26 handle="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.358 [INFO][4237] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.193/26] handle="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.358 [INFO][4237] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.449082 containerd[1739]: 2026-03-07 01:28:48.358 [INFO][4237] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.193/26] IPv6=[] ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" HandleID="k8s-pod-network.8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Workload="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" Mar 7 01:28:48.449644 containerd[1739]: 2026-03-07 01:28:48.383 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"csi-node-driver-66vcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ca627c65b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:48.449644 containerd[1739]: 2026-03-07 01:28:48.384 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.193/32] ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" Mar 7 01:28:48.449644 containerd[1739]: 2026-03-07 01:28:48.384 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ca627c65b4 ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" Mar 7 01:28:48.449644 containerd[1739]: 2026-03-07 01:28:48.395 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" Mar 7 01:28:48.449644 containerd[1739]: 2026-03-07 01:28:48.399 
[INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705", Pod:"csi-node-driver-66vcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ca627c65b4", MAC:"f2:75:c3:10:54:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:48.449644 containerd[1739]: 2026-03-07 01:28:48.428 [INFO][4224] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705" Namespace="calico-system" Pod="csi-node-driver-66vcb" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-csi--node--driver--66vcb-eth0" Mar 7 01:28:48.678337 containerd[1739]: time="2026-03-07T01:28:48.678003965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:48.678337 containerd[1739]: time="2026-03-07T01:28:48.678074005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:48.678337 containerd[1739]: time="2026-03-07T01:28:48.678099365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:48.678925 containerd[1739]: time="2026-03-07T01:28:48.678286406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.490 [INFO][4253] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.491 [INFO][4253] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" iface="eth0" netns="/var/run/netns/cni-ee142a6e-e9df-77f5-4710-1241d6506e85" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.491 [INFO][4253] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" iface="eth0" netns="/var/run/netns/cni-ee142a6e-e9df-77f5-4710-1241d6506e85" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.492 [INFO][4253] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" iface="eth0" netns="/var/run/netns/cni-ee142a6e-e9df-77f5-4710-1241d6506e85" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.492 [INFO][4253] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.492 [INFO][4253] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.662 [INFO][4377] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.662 [INFO][4377] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.662 [INFO][4377] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.682 [WARNING][4377] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.682 [INFO][4377] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.690 [INFO][4377] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.724114 containerd[1739]: 2026-03-07 01:28:48.713 [INFO][4253] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:28:48.733165 containerd[1739]: time="2026-03-07T01:28:48.727337555Z" level=info msg="TearDown network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\" successfully" Mar 7 01:28:48.733165 containerd[1739]: time="2026-03-07T01:28:48.727376395Z" level=info msg="StopPodSandbox for \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\" returns successfully" Mar 7 01:28:48.729144 systemd[1]: run-netns-cni\x2dee142a6e\x2de9df\x2d77f5\x2d4710\x2d1241d6506e85.mount: Deactivated successfully. Mar 7 01:28:48.735403 containerd[1739]: time="2026-03-07T01:28:48.735368413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vtc8,Uid:f1d81093-838f-42f2-bd2e-44a2be2ca4cf,Namespace:kube-system,Attempt:1,}" Mar 7 01:28:48.760472 systemd[1]: Started cri-containerd-8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705.scope - libcontainer container 8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705. 
Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.533 [INFO][4325] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.534 [INFO][4325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" iface="eth0" netns="/var/run/netns/cni-cb8d4373-7f65-0e24-e612-bf97e8d01c37" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.534 [INFO][4325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" iface="eth0" netns="/var/run/netns/cni-cb8d4373-7f65-0e24-e612-bf97e8d01c37" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.534 [INFO][4325] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" iface="eth0" netns="/var/run/netns/cni-cb8d4373-7f65-0e24-e612-bf97e8d01c37" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.535 [INFO][4325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.536 [INFO][4325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.713 [INFO][4395] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.714 [INFO][4395] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.715 [INFO][4395] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.758 [WARNING][4395] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.758 [INFO][4395] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.777 [INFO][4395] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.804130 containerd[1739]: 2026-03-07 01:28:48.795 [INFO][4325] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:28:48.806950 containerd[1739]: time="2026-03-07T01:28:48.806892053Z" level=info msg="TearDown network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\" successfully" Mar 7 01:28:48.806950 containerd[1739]: time="2026-03-07T01:28:48.806942253Z" level=info msg="StopPodSandbox for \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\" returns successfully" Mar 7 01:28:48.809251 containerd[1739]: time="2026-03-07T01:28:48.808301456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xfb4w,Uid:90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5,Namespace:calico-system,Attempt:1,}" Mar 7 01:28:48.809742 systemd[1]: run-netns-cni\x2dcb8d4373\x2d7f65\x2d0e24\x2de612\x2dbf97e8d01c37.mount: Deactivated successfully. Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.681 [INFO][4320] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.681 [INFO][4320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" iface="eth0" netns="/var/run/netns/cni-f4c82da2-e4bd-8469-e8dc-8108afdc391e" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.681 [INFO][4320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" iface="eth0" netns="/var/run/netns/cni-f4c82da2-e4bd-8469-e8dc-8108afdc391e" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.681 [INFO][4320] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" iface="eth0" netns="/var/run/netns/cni-f4c82da2-e4bd-8469-e8dc-8108afdc391e" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.681 [INFO][4320] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.681 [INFO][4320] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.777 [INFO][4438] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.779 [INFO][4438] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.779 [INFO][4438] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.815 [WARNING][4438] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.815 [INFO][4438] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.818 [INFO][4438] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.833944 containerd[1739]: 2026-03-07 01:28:48.823 [INFO][4320] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:28:48.837800 containerd[1739]: time="2026-03-07T01:28:48.837670562Z" level=info msg="TearDown network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\" successfully" Mar 7 01:28:48.837800 containerd[1739]: time="2026-03-07T01:28:48.837703642Z" level=info msg="StopPodSandbox for \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\" returns successfully" Mar 7 01:28:48.840410 systemd[1]: run-netns-cni\x2df4c82da2\x2de4bd\x2d8469\x2de8dc\x2d8108afdc391e.mount: Deactivated successfully. 
Mar 7 01:28:48.841318 containerd[1739]: time="2026-03-07T01:28:48.840485968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66vcb,Uid:d9ccc5b3-674c-44b0-9a28-c6c06c5b61cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705\"" Mar 7 01:28:48.850273 containerd[1739]: time="2026-03-07T01:28:48.850131989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.499 [INFO][4294] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.499 [INFO][4294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" iface="eth0" netns="/var/run/netns/cni-855bb937-20c5-bd23-0f33-3123c5b6a674" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.499 [INFO][4294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" iface="eth0" netns="/var/run/netns/cni-855bb937-20c5-bd23-0f33-3123c5b6a674" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.501 [INFO][4294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" iface="eth0" netns="/var/run/netns/cni-855bb937-20c5-bd23-0f33-3123c5b6a674" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.501 [INFO][4294] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.501 [INFO][4294] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.789 [INFO][4388] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.790 [INFO][4388] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.818 [INFO][4388] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.834 [WARNING][4388] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.837 [INFO][4388] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.842 [INFO][4388] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.854889 containerd[1739]: 2026-03-07 01:28:48.849 [INFO][4294] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:28:48.856119 containerd[1739]: time="2026-03-07T01:28:48.854905800Z" level=info msg="TearDown network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\" successfully" Mar 7 01:28:48.856119 containerd[1739]: time="2026-03-07T01:28:48.854928920Z" level=info msg="StopPodSandbox for \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\" returns successfully" Mar 7 01:28:48.856119 containerd[1739]: time="2026-03-07T01:28:48.855941802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-mchvs,Uid:73e5dc10-ec6b-44b7-a83a-ebda472da3b6,Namespace:calico-system,Attempt:1,}" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.594 [INFO][4293] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.594 [INFO][4293] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" iface="eth0" netns="/var/run/netns/cni-27ef572c-c6ad-4d88-98a2-f09b5b5eb63e" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.595 [INFO][4293] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" iface="eth0" netns="/var/run/netns/cni-27ef572c-c6ad-4d88-98a2-f09b5b5eb63e" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.599 [INFO][4293] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" iface="eth0" netns="/var/run/netns/cni-27ef572c-c6ad-4d88-98a2-f09b5b5eb63e" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.599 [INFO][4293] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.599 [INFO][4293] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.792 [INFO][4409] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.794 [INFO][4409] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.842 [INFO][4409] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.861 [WARNING][4409] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.861 [INFO][4409] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.864 [INFO][4409] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.879015 containerd[1739]: 2026-03-07 01:28:48.876 [INFO][4293] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:28:48.882842 containerd[1739]: time="2026-03-07T01:28:48.879794536Z" level=info msg="TearDown network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\" successfully" Mar 7 01:28:48.882842 containerd[1739]: time="2026-03-07T01:28:48.879823976Z" level=info msg="StopPodSandbox for \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\" returns successfully" Mar 7 01:28:48.882842 containerd[1739]: time="2026-03-07T01:28:48.880679818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmk8x,Uid:071a7194-ad19-4e52-9195-4caf3f158140,Namespace:kube-system,Attempt:1,}" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.648 [INFO][4314] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.648 [INFO][4314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" iface="eth0" netns="/var/run/netns/cni-66bc10d0-4e0d-0b3a-d09f-2f01685e35ab" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.648 [INFO][4314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" iface="eth0" netns="/var/run/netns/cni-66bc10d0-4e0d-0b3a-d09f-2f01685e35ab" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.651 [INFO][4314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" iface="eth0" netns="/var/run/netns/cni-66bc10d0-4e0d-0b3a-d09f-2f01685e35ab" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.651 [INFO][4314] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.651 [INFO][4314] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.798 [INFO][4422] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.798 [INFO][4422] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.865 [INFO][4422] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.883 [WARNING][4422] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.883 [INFO][4422] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.886 [INFO][4422] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.920531 containerd[1739]: 2026-03-07 01:28:48.891 [INFO][4314] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.640 [INFO][4342] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.640 [INFO][4342] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" iface="eth0" netns="/var/run/netns/cni-66d8fff2-99b2-43f3-b586-3fa545334702" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.640 [INFO][4342] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" iface="eth0" netns="/var/run/netns/cni-66d8fff2-99b2-43f3-b586-3fa545334702" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.641 [INFO][4342] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" iface="eth0" netns="/var/run/netns/cni-66d8fff2-99b2-43f3-b586-3fa545334702" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.641 [INFO][4342] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.643 [INFO][4342] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.820 [INFO][4420] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.821 [INFO][4420] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.886 [INFO][4420] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.905 [WARNING][4420] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.905 [INFO][4420] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.907 [INFO][4420] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:48.922879 containerd[1739]: 2026-03-07 01:28:48.917 [INFO][4342] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:28:48.936958 containerd[1739]: time="2026-03-07T01:28:48.936834423Z" level=info msg="TearDown network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\" successfully" Mar 7 01:28:48.936958 containerd[1739]: time="2026-03-07T01:28:48.936874943Z" level=info msg="StopPodSandbox for \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\" returns successfully" Mar 7 01:28:48.936958 containerd[1739]: time="2026-03-07T01:28:48.937616985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-szzxj,Uid:e2a5fc71-1b77-4f5e-a88c-b160d32eae5f,Namespace:calico-system,Attempt:1,}" Mar 7 01:28:48.940188 kubelet[3162]: I0307 01:28:48.940153 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-backend-key-pair\") pod \"69d1a88c-b6a1-4699-84d0-d686e59af986\" (UID: 
\"69d1a88c-b6a1-4699-84d0-d686e59af986\") " Mar 7 01:28:48.940347 kubelet[3162]: I0307 01:28:48.940201 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkrx2\" (UniqueName: \"kubernetes.io/projected/69d1a88c-b6a1-4699-84d0-d686e59af986-kube-api-access-qkrx2\") pod \"69d1a88c-b6a1-4699-84d0-d686e59af986\" (UID: \"69d1a88c-b6a1-4699-84d0-d686e59af986\") " Mar 7 01:28:48.940347 kubelet[3162]: I0307 01:28:48.940235 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-nginx-config\") pod \"69d1a88c-b6a1-4699-84d0-d686e59af986\" (UID: \"69d1a88c-b6a1-4699-84d0-d686e59af986\") " Mar 7 01:28:48.940347 kubelet[3162]: I0307 01:28:48.940255 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-ca-bundle\") pod \"69d1a88c-b6a1-4699-84d0-d686e59af986\" (UID: \"69d1a88c-b6a1-4699-84d0-d686e59af986\") " Mar 7 01:28:48.943070 kubelet[3162]: I0307 01:28:48.940623 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "69d1a88c-b6a1-4699-84d0-d686e59af986" (UID: "69d1a88c-b6a1-4699-84d0-d686e59af986"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:28:48.943872 kubelet[3162]: I0307 01:28:48.943650 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "69d1a88c-b6a1-4699-84d0-d686e59af986" (UID: "69d1a88c-b6a1-4699-84d0-d686e59af986"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:28:48.965554 kubelet[3162]: I0307 01:28:48.965511 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "69d1a88c-b6a1-4699-84d0-d686e59af986" (UID: "69d1a88c-b6a1-4699-84d0-d686e59af986"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:28:48.966959 kubelet[3162]: I0307 01:28:48.966921 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d1a88c-b6a1-4699-84d0-d686e59af986-kube-api-access-qkrx2" (OuterVolumeSpecName: "kube-api-access-qkrx2") pod "69d1a88c-b6a1-4699-84d0-d686e59af986" (UID: "69d1a88c-b6a1-4699-84d0-d686e59af986"). InnerVolumeSpecName "kube-api-access-qkrx2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:28:48.978557 containerd[1739]: time="2026-03-07T01:28:48.977486594Z" level=info msg="TearDown network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\" successfully" Mar 7 01:28:48.978557 containerd[1739]: time="2026-03-07T01:28:48.977528914Z" level=info msg="StopPodSandbox for \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\" returns successfully" Mar 7 01:28:48.981081 containerd[1739]: time="2026-03-07T01:28:48.981040042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4f64765f-kwd8x,Uid:b138cb7e-09ad-4994-8633-ecd968afa99c,Namespace:calico-system,Attempt:1,}" Mar 7 01:28:49.041707 kubelet[3162]: I0307 01:28:49.041623 3162 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-24b0a814a4\" DevicePath \"\"" Mar 7 01:28:49.041707 kubelet[3162]: I0307 
01:28:49.041659 3162 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qkrx2\" (UniqueName: \"kubernetes.io/projected/69d1a88c-b6a1-4699-84d0-d686e59af986-kube-api-access-qkrx2\") on node \"ci-4081.3.6-n-24b0a814a4\" DevicePath \"\"" Mar 7 01:28:49.041707 kubelet[3162]: I0307 01:28:49.041673 3162 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-nginx-config\") on node \"ci-4081.3.6-n-24b0a814a4\" DevicePath \"\"" Mar 7 01:28:49.041707 kubelet[3162]: I0307 01:28:49.041690 3162 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69d1a88c-b6a1-4699-84d0-d686e59af986-whisker-ca-bundle\") on node \"ci-4081.3.6-n-24b0a814a4\" DevicePath \"\"" Mar 7 01:28:49.296455 systemd[1]: Removed slice kubepods-besteffort-pod69d1a88c_b6a1_4699_84d0_d686e59af986.slice - libcontainer container kubepods-besteffort-pod69d1a88c_b6a1_4699_84d0_d686e59af986.slice. 
Mar 7 01:28:49.363537 systemd-networkd[1356]: cali11fbbe5af9b: Link UP Mar 7 01:28:49.365261 systemd-networkd[1356]: cali11fbbe5af9b: Gained carrier Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:48.980 [ERROR][4507] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.006 [INFO][4507] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0 coredns-674b8bbfcf- kube-system f1d81093-838f-42f2-bd2e-44a2be2ca4cf 892 0 2026-03-07 01:28:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 coredns-674b8bbfcf-2vtc8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali11fbbe5af9b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.008 [INFO][4507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.144 [INFO][4555] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" 
HandleID="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.176 [INFO][4555] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" HandleID="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e41f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"coredns-674b8bbfcf-2vtc8", "timestamp":"2026-03-07 01:28:49.144493086 +0000 UTC"}, Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000312f20)} Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.176 [INFO][4555] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.176 [INFO][4555] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.176 [INFO][4555] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.185 [INFO][4555] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.212 [INFO][4555] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.233 [INFO][4555] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.240 [INFO][4555] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.249 [INFO][4555] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.249 [INFO][4555] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.253 [INFO][4555] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65 Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.267 [INFO][4555] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.289 [INFO][4555] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.34.194/26] block=192.168.34.192/26 handle="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.291 [INFO][4555] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.194/26] handle="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.294 [INFO][4555] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:49.459438 containerd[1739]: 2026-03-07 01:28:49.303 [INFO][4555] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.194/26] IPv6=[] ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" HandleID="k8s-pod-network.1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:49.461718 containerd[1739]: 2026-03-07 01:28:49.325 [INFO][4507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f1d81093-838f-42f2-bd2e-44a2be2ca4cf", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"coredns-674b8bbfcf-2vtc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11fbbe5af9b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.461718 containerd[1739]: 2026-03-07 01:28:49.326 [INFO][4507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.194/32] ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:49.461718 containerd[1739]: 2026-03-07 01:28:49.326 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11fbbe5af9b ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:49.461718 containerd[1739]: 2026-03-07 01:28:49.366 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:49.461718 containerd[1739]: 2026-03-07 01:28:49.374 [INFO][4507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f1d81093-838f-42f2-bd2e-44a2be2ca4cf", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65", Pod:"coredns-674b8bbfcf-2vtc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11fbbe5af9b", MAC:"22:8a:59:ef:10:f9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.461718 containerd[1739]: 2026-03-07 01:28:49.442 [INFO][4507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vtc8" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:28:49.513673 systemd[1]: Created slice kubepods-besteffort-podb7225c86_cb44_48b4_940d_bbbef231eadf.slice - libcontainer container kubepods-besteffort-podb7225c86_cb44_48b4_940d_bbbef231eadf.slice. 
Mar 7 01:28:49.523051 systemd-networkd[1356]: caliedfcfd3a75e: Link UP Mar 7 01:28:49.525259 systemd-networkd[1356]: caliedfcfd3a75e: Gained carrier Mar 7 01:28:49.547401 kubelet[3162]: I0307 01:28:49.547287 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b7225c86-cb44-48b4-940d-bbbef231eadf-nginx-config\") pod \"whisker-656fbb6f58-fhhm2\" (UID: \"b7225c86-cb44-48b4-940d-bbbef231eadf\") " pod="calico-system/whisker-656fbb6f58-fhhm2" Mar 7 01:28:49.547401 kubelet[3162]: I0307 01:28:49.547335 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7225c86-cb44-48b4-940d-bbbef231eadf-whisker-backend-key-pair\") pod \"whisker-656fbb6f58-fhhm2\" (UID: \"b7225c86-cb44-48b4-940d-bbbef231eadf\") " pod="calico-system/whisker-656fbb6f58-fhhm2" Mar 7 01:28:49.547401 kubelet[3162]: I0307 01:28:49.547359 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tll\" (UniqueName: \"kubernetes.io/projected/b7225c86-cb44-48b4-940d-bbbef231eadf-kube-api-access-45tll\") pod \"whisker-656fbb6f58-fhhm2\" (UID: \"b7225c86-cb44-48b4-940d-bbbef231eadf\") " pod="calico-system/whisker-656fbb6f58-fhhm2" Mar 7 01:28:49.547401 kubelet[3162]: I0307 01:28:49.547379 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7225c86-cb44-48b4-940d-bbbef231eadf-whisker-ca-bundle\") pod \"whisker-656fbb6f58-fhhm2\" (UID: \"b7225c86-cb44-48b4-940d-bbbef231eadf\") " pod="calico-system/whisker-656fbb6f58-fhhm2" Mar 7 01:28:49.553618 containerd[1739]: time="2026-03-07T01:28:49.552963758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:49.553618 containerd[1739]: time="2026-03-07T01:28:49.553037438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:49.553618 containerd[1739]: time="2026-03-07T01:28:49.553052318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:49.553618 containerd[1739]: time="2026-03-07T01:28:49.553136158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:48.999 [ERROR][4525] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.027 [INFO][4525] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0 goldmane-5b85766d88- calico-system 90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5 894 0 2026-03-07 01:28:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 goldmane-5b85766d88-xfb4w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliedfcfd3a75e [] [] }} ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.028 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.179 [INFO][4588] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" HandleID="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.211 [INFO][4588] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" HandleID="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bc010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"goldmane-5b85766d88-xfb4w", "timestamp":"2026-03-07 01:28:49.179217204 +0000 UTC"}, Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c9760)} Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.211 [INFO][4588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.300 [INFO][4588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.301 [INFO][4588] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.317 [INFO][4588] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.334 [INFO][4588] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.346 [INFO][4588] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.367 [INFO][4588] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.373 [INFO][4588] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.374 [INFO][4588] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.391 [INFO][4588] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83 Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.425 [INFO][4588] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.484 [INFO][4588] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.34.195/26] block=192.168.34.192/26 handle="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.487 [INFO][4588] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.195/26] handle="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.487 [INFO][4588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:49.585479 containerd[1739]: 2026-03-07 01:28:49.487 [INFO][4588] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.195/26] IPv6=[] ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" HandleID="k8s-pod-network.2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:49.586147 containerd[1739]: 2026-03-07 01:28:49.500 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"goldmane-5b85766d88-xfb4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliedfcfd3a75e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.586147 containerd[1739]: 2026-03-07 01:28:49.500 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.195/32] ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:49.586147 containerd[1739]: 2026-03-07 01:28:49.500 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedfcfd3a75e ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:49.586147 containerd[1739]: 2026-03-07 01:28:49.527 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:49.586147 containerd[1739]: 2026-03-07 01:28:49.527 [INFO][4525] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83", Pod:"goldmane-5b85766d88-xfb4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliedfcfd3a75e", MAC:"4e:37:07:0b:0a:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.586147 containerd[1739]: 2026-03-07 01:28:49.577 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83" Namespace="calico-system" Pod="goldmane-5b85766d88-xfb4w" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:28:49.620611 containerd[1739]: time="2026-03-07T01:28:49.620300468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:49.620611 containerd[1739]: time="2026-03-07T01:28:49.620370868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:49.620611 containerd[1739]: time="2026-03-07T01:28:49.620386428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:49.620611 containerd[1739]: time="2026-03-07T01:28:49.620490429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:49.655102 systemd-networkd[1356]: cali5ca627c65b4: Gained IPv6LL Mar 7 01:28:49.676337 systemd-networkd[1356]: cali1c05fd67a80: Link UP Mar 7 01:28:49.676978 systemd-networkd[1356]: cali1c05fd67a80: Gained carrier Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.178 [ERROR][4598] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.223 [INFO][4598] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0 calico-apiserver-765cff7995- calico-system e2a5fc71-1b77-4f5e-a88c-b160d32eae5f 898 0 2026-03-07 01:28:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:765cff7995 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 calico-apiserver-765cff7995-szzxj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1c05fd67a80 [] [] }} ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.223 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.370 [INFO][4662] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" HandleID="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.441 [INFO][4662] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" HandleID="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"calico-apiserver-765cff7995-szzxj", "timestamp":"2026-03-07 01:28:49.37018759 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400047e580)} Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.441 [INFO][4662] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.495 [INFO][4662] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.495 [INFO][4662] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.520 [INFO][4662] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.566 [INFO][4662] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.596 [INFO][4662] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.601 [INFO][4662] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.606 [INFO][4662] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.606 [INFO][4662] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 
containerd[1739]: 2026-03-07 01:28:49.609 [INFO][4662] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.628 [INFO][4662] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.659 [INFO][4662] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.34.196/26] block=192.168.34.192/26 handle="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.659 [INFO][4662] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.196/26] handle="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.659 [INFO][4662] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:28:49.705519 containerd[1739]: 2026-03-07 01:28:49.659 [INFO][4662] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.196/26] IPv6=[] ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" HandleID="k8s-pod-network.053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:49.706179 containerd[1739]: 2026-03-07 01:28:49.665 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"e2a5fc71-1b77-4f5e-a88c-b160d32eae5f", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"calico-apiserver-765cff7995-szzxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c05fd67a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.706179 containerd[1739]: 2026-03-07 01:28:49.667 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.196/32] ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:49.706179 containerd[1739]: 2026-03-07 01:28:49.667 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c05fd67a80 ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:49.706179 containerd[1739]: 2026-03-07 01:28:49.678 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:49.706179 containerd[1739]: 2026-03-07 01:28:49.679 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"e2a5fc71-1b77-4f5e-a88c-b160d32eae5f", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c", Pod:"calico-apiserver-765cff7995-szzxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c05fd67a80", MAC:"4a:4d:7f:c1:45:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.706179 containerd[1739]: 2026-03-07 01:28:49.702 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c" Namespace="calico-system" Pod="calico-apiserver-765cff7995-szzxj" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:28:49.764635 containerd[1739]: time="2026-03-07T01:28:49.764189749Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:49.764635 containerd[1739]: time="2026-03-07T01:28:49.764255790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:49.764635 containerd[1739]: time="2026-03-07T01:28:49.764274710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:49.764635 containerd[1739]: time="2026-03-07T01:28:49.764366110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:49.777037 systemd[1]: Started cri-containerd-1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65.scope - libcontainer container 1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65. Mar 7 01:28:49.795138 systemd[1]: run-netns-cni\x2d855bb937\x2d20c5\x2dbd23\x2d0f33\x2d3123c5b6a674.mount: Deactivated successfully. Mar 7 01:28:49.795227 systemd[1]: run-netns-cni\x2d66bc10d0\x2d4e0d\x2d0b3a\x2dd09f\x2d2f01685e35ab.mount: Deactivated successfully. Mar 7 01:28:49.795274 systemd[1]: run-netns-cni\x2d27ef572c\x2dc6ad\x2d4d88\x2d98a2\x2df09b5b5eb63e.mount: Deactivated successfully. Mar 7 01:28:49.795319 systemd[1]: run-netns-cni\x2d66d8fff2\x2d99b2\x2d43f3\x2db586\x2d3fa545334702.mount: Deactivated successfully. Mar 7 01:28:49.795363 systemd[1]: var-lib-kubelet-pods-69d1a88c\x2db6a1\x2d4699\x2d84d0\x2dd686e59af986-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqkrx2.mount: Deactivated successfully. Mar 7 01:28:49.795413 systemd[1]: var-lib-kubelet-pods-69d1a88c\x2db6a1\x2d4699\x2d84d0\x2dd686e59af986-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 7 01:28:49.816234 systemd-networkd[1356]: cali6dc28c71b40: Link UP Mar 7 01:28:49.819568 systemd-networkd[1356]: cali6dc28c71b40: Gained carrier Mar 7 01:28:49.820239 systemd[1]: Started cri-containerd-053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c.scope - libcontainer container 053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c. Mar 7 01:28:49.821552 systemd[1]: Started cri-containerd-2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83.scope - libcontainer container 2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83. Mar 7 01:28:49.825335 containerd[1739]: time="2026-03-07T01:28:49.825292406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656fbb6f58-fhhm2,Uid:b7225c86-cb44-48b4-940d-bbbef231eadf,Namespace:calico-system,Attempt:0,}" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.126 [ERROR][4570] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.187 [INFO][4570] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0 coredns-674b8bbfcf- kube-system 071a7194-ad19-4e52-9195-4caf3f158140 895 0 2026-03-07 01:28:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 coredns-674b8bbfcf-cmk8x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6dc28c71b40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" 
WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.189 [INFO][4570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.424 [INFO][4655] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" HandleID="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.502 [INFO][4655] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" HandleID="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a9c40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"coredns-674b8bbfcf-cmk8x", "timestamp":"2026-03-07 01:28:49.424297991 +0000 UTC"}, Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.502 [INFO][4655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.659 [INFO][4655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.659 [INFO][4655] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.670 [INFO][4655] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.692 [INFO][4655] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.710 [INFO][4655] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.713 [INFO][4655] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.716 [INFO][4655] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.716 [INFO][4655] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.736 [INFO][4655] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811 Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.760 [INFO][4655] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.782 [INFO][4655] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.34.197/26] block=192.168.34.192/26 handle="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.782 [INFO][4655] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.197/26] handle="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.782 [INFO][4655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:49.863090 containerd[1739]: 2026-03-07 01:28:49.784 [INFO][4655] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.197/26] IPv6=[] ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" HandleID="k8s-pod-network.55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:49.863658 containerd[1739]: 2026-03-07 01:28:49.796 [INFO][4570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071a7194-ad19-4e52-9195-4caf3f158140", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"coredns-674b8bbfcf-cmk8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6dc28c71b40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.863658 containerd[1739]: 2026-03-07 01:28:49.796 [INFO][4570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.197/32] ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:49.863658 containerd[1739]: 2026-03-07 01:28:49.796 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dc28c71b40 ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:49.863658 containerd[1739]: 2026-03-07 01:28:49.819 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:49.863658 containerd[1739]: 2026-03-07 01:28:49.829 [INFO][4570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071a7194-ad19-4e52-9195-4caf3f158140", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811", Pod:"coredns-674b8bbfcf-cmk8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6dc28c71b40", MAC:"e2:fe:70:da:dd:13", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.863658 containerd[1739]: 2026-03-07 01:28:49.859 [INFO][4570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811" Namespace="kube-system" Pod="coredns-674b8bbfcf-cmk8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:28:49.933176 systemd-networkd[1356]: cali61ed460240d: Link UP Mar 7 01:28:49.934255 systemd-networkd[1356]: cali61ed460240d: Gained carrier Mar 7 01:28:49.945321 containerd[1739]: time="2026-03-07T01:28:49.945271194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vtc8,Uid:f1d81093-838f-42f2-bd2e-44a2be2ca4cf,Namespace:kube-system,Attempt:1,} returns sandbox id \"1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65\"" Mar 7 01:28:49.961196 containerd[1739]: time="2026-03-07T01:28:49.961151749Z" level=info msg="CreateContainer within sandbox \"1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.128 [ERROR][4560] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.219 [INFO][4560] cni-plugin/plugin.go 342: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0 calico-apiserver-765cff7995- calico-system 73e5dc10-ec6b-44b7-a83a-ebda472da3b6 893 0 2026-03-07 01:28:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:765cff7995 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 calico-apiserver-765cff7995-mchvs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali61ed460240d [] [] }} ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.219 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.471 [INFO][4660] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" HandleID="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.519 [INFO][4660] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" HandleID="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" 
Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000405570), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"calico-apiserver-765cff7995-mchvs", "timestamp":"2026-03-07 01:28:49.471480816 +0000 UTC"}, Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40005449a0)} Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.519 [INFO][4660] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.783 [INFO][4660] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.787 [INFO][4660] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.794 [INFO][4660] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.803 [INFO][4660] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.837 [INFO][4660] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.843 [INFO][4660] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.848 [INFO][4660] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 
host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.848 [INFO][4660] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.857 [INFO][4660] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259 Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.867 [INFO][4660] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.884 [INFO][4660] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.34.198/26] block=192.168.34.192/26 handle="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.884 [INFO][4660] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.198/26] handle="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.884 [INFO][4660] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:28:49.993108 containerd[1739]: 2026-03-07 01:28:49.884 [INFO][4660] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.198/26] IPv6=[] ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" HandleID="k8s-pod-network.aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:49.994545 containerd[1739]: 2026-03-07 01:28:49.915 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"73e5dc10-ec6b-44b7-a83a-ebda472da3b6", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"calico-apiserver-765cff7995-mchvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.34.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61ed460240d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.994545 containerd[1739]: 2026-03-07 01:28:49.917 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.198/32] ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:49.994545 containerd[1739]: 2026-03-07 01:28:49.918 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61ed460240d ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:49.994545 containerd[1739]: 2026-03-07 01:28:49.927 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:49.994545 containerd[1739]: 2026-03-07 01:28:49.936 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"73e5dc10-ec6b-44b7-a83a-ebda472da3b6", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259", Pod:"calico-apiserver-765cff7995-mchvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61ed460240d", MAC:"f6:cf:4a:7c:5b:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:49.994545 containerd[1739]: 2026-03-07 01:28:49.977 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259" Namespace="calico-system" Pod="calico-apiserver-765cff7995-mchvs" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:28:50.007705 containerd[1739]: time="2026-03-07T01:28:50.007445412Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:50.007705 containerd[1739]: time="2026-03-07T01:28:50.007516132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:50.007705 containerd[1739]: time="2026-03-07T01:28:50.007530933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.007705 containerd[1739]: time="2026-03-07T01:28:50.007624613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.052816 systemd-networkd[1356]: caliacc68ccd682: Link UP Mar 7 01:28:50.058738 systemd-networkd[1356]: caliacc68ccd682: Gained carrier Mar 7 01:28:50.080568 containerd[1739]: time="2026-03-07T01:28:50.080360255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-szzxj,Uid:e2a5fc71-1b77-4f5e-a88c-b160d32eae5f,Namespace:calico-system,Attempt:1,} returns sandbox id \"053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c\"" Mar 7 01:28:50.081732 containerd[1739]: time="2026-03-07T01:28:50.081456538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xfb4w,Uid:90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5,Namespace:calico-system,Attempt:1,} returns sandbox id \"2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83\"" Mar 7 01:28:50.086311 containerd[1739]: time="2026-03-07T01:28:50.086278228Z" level=info msg="CreateContainer within sandbox \"1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65504eba20273dfca3cacd5111f878c808e08211100bab13c3362b2fb247ff8d\"" Mar 7 01:28:50.086737 containerd[1739]: time="2026-03-07T01:28:50.086699149Z" level=info msg="StartContainer for 
\"65504eba20273dfca3cacd5111f878c808e08211100bab13c3362b2fb247ff8d\"" Mar 7 01:28:50.099843 kubelet[3162]: I0307 01:28:50.099804 3162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d1a88c-b6a1-4699-84d0-d686e59af986" path="/var/lib/kubelet/pods/69d1a88c-b6a1-4699-84d0-d686e59af986/volumes" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.208 [ERROR][4626] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.266 [INFO][4626] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0 calico-kube-controllers-7b4f64765f- calico-system b138cb7e-09ad-4994-8633-ecd968afa99c 897 0 2026-03-07 01:28:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b4f64765f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 calico-kube-controllers-7b4f64765f-kwd8x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliacc68ccd682 [] [] }} ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.266 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" 
WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.509 [INFO][4671] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" HandleID="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.564 [INFO][4671] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" HandleID="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"calico-kube-controllers-7b4f64765f-kwd8x", "timestamp":"2026-03-07 01:28:49.50914726 +0000 UTC"}, Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400053b080)} Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.564 [INFO][4671] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.884 [INFO][4671] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.885 [INFO][4671] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.895 [INFO][4671] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.910 [INFO][4671] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.935 [INFO][4671] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.947 [INFO][4671] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.958 [INFO][4671] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.959 [INFO][4671] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.964 [INFO][4671] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.981 [INFO][4671] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.997 [INFO][4671] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.34.199/26] block=192.168.34.192/26 handle="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.999 [INFO][4671] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.199/26] handle="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.999 [INFO][4671] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:28:50.109217 containerd[1739]: 2026-03-07 01:28:49.999 [INFO][4671] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.199/26] IPv6=[] ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" HandleID="k8s-pod-network.41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:50.111606 containerd[1739]: 2026-03-07 01:28:50.023 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0", GenerateName:"calico-kube-controllers-7b4f64765f-", Namespace:"calico-system", SelfLink:"", UID:"b138cb7e-09ad-4994-8633-ecd968afa99c", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4f64765f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"calico-kube-controllers-7b4f64765f-kwd8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacc68ccd682", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:50.111606 containerd[1739]: 2026-03-07 01:28:50.023 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.199/32] ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:50.111606 containerd[1739]: 2026-03-07 01:28:50.023 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacc68ccd682 ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:50.111606 containerd[1739]: 2026-03-07 01:28:50.064 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:50.111606 containerd[1739]: 2026-03-07 01:28:50.069 [INFO][4626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0", GenerateName:"calico-kube-controllers-7b4f64765f-", Namespace:"calico-system", SelfLink:"", UID:"b138cb7e-09ad-4994-8633-ecd968afa99c", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4f64765f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b", Pod:"calico-kube-controllers-7b4f64765f-kwd8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacc68ccd682", MAC:"ae:e3:78:58:1d:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:50.111606 containerd[1739]: 2026-03-07 01:28:50.097 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b" Namespace="calico-system" Pod="calico-kube-controllers-7b4f64765f-kwd8x" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:28:50.130003 containerd[1739]: time="2026-03-07T01:28:50.129756925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:50.130003 containerd[1739]: time="2026-03-07T01:28:50.129829005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:50.130003 containerd[1739]: time="2026-03-07T01:28:50.129846246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.131421 containerd[1739]: time="2026-03-07T01:28:50.129951966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.169460 systemd[1]: Started cri-containerd-55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811.scope - libcontainer container 55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811. Mar 7 01:28:50.179096 systemd[1]: Started cri-containerd-65504eba20273dfca3cacd5111f878c808e08211100bab13c3362b2fb247ff8d.scope - libcontainer container 65504eba20273dfca3cacd5111f878c808e08211100bab13c3362b2fb247ff8d. 
Mar 7 01:28:50.180742 systemd[1]: Started cri-containerd-aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259.scope - libcontainer container aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259. Mar 7 01:28:50.217142 containerd[1739]: time="2026-03-07T01:28:50.217042760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:50.217142 containerd[1739]: time="2026-03-07T01:28:50.217103880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:50.217142 containerd[1739]: time="2026-03-07T01:28:50.217123320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.217346 containerd[1739]: time="2026-03-07T01:28:50.217194320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.264262 systemd[1]: Started cri-containerd-41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b.scope - libcontainer container 41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b. 
Mar 7 01:28:50.293190 containerd[1739]: time="2026-03-07T01:28:50.292686009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cmk8x,Uid:071a7194-ad19-4e52-9195-4caf3f158140,Namespace:kube-system,Attempt:1,} returns sandbox id \"55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811\"" Mar 7 01:28:50.309022 containerd[1739]: time="2026-03-07T01:28:50.308489524Z" level=info msg="CreateContainer within sandbox \"55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:28:50.337877 containerd[1739]: time="2026-03-07T01:28:50.337625629Z" level=info msg="StartContainer for \"65504eba20273dfca3cacd5111f878c808e08211100bab13c3362b2fb247ff8d\" returns successfully" Mar 7 01:28:50.391489 systemd-networkd[1356]: calia4a64a2dbd7: Link UP Mar 7 01:28:50.393802 systemd-networkd[1356]: calia4a64a2dbd7: Gained carrier Mar 7 01:28:50.429210 containerd[1739]: time="2026-03-07T01:28:50.428846273Z" level=info msg="CreateContainer within sandbox \"55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73d66e8e8ddb469c5de745033919a35b407e189bf0bc3a82a0199e98e0bd56c5\"" Mar 7 01:28:50.430306 containerd[1739]: time="2026-03-07T01:28:50.429658795Z" level=info msg="StartContainer for \"73d66e8e8ddb469c5de745033919a35b407e189bf0bc3a82a0199e98e0bd56c5\"" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.111 [ERROR][4844] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.183 [INFO][4844] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0 whisker-656fbb6f58- calico-system 
b7225c86-cb44-48b4-940d-bbbef231eadf 921 0 2026-03-07 01:28:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:656fbb6f58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-24b0a814a4 whisker-656fbb6f58-fhhm2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia4a64a2dbd7 [] [] }} ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.184 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.309 [INFO][4981] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" HandleID="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.327 [INFO][4981] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" HandleID="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000257910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-24b0a814a4", "pod":"whisker-656fbb6f58-fhhm2", "timestamp":"2026-03-07 
01:28:50.309539367 +0000 UTC"}, Hostname:"ci-4081.3.6-n-24b0a814a4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003d1080)} Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.327 [INFO][4981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.328 [INFO][4981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.328 [INFO][4981] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-24b0a814a4' Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.330 [INFO][4981] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.342 [INFO][4981] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.349 [INFO][4981] ipam/ipam.go 526: Trying affinity for 192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.353 [INFO][4981] ipam/ipam.go 160: Attempting to load block cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.359 [INFO][4981] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.34.192/26 host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.359 [INFO][4981] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.34.192/26 handle="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" host="ci-4081.3.6-n-24b0a814a4" Mar 7 
01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.362 [INFO][4981] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476 Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.368 [INFO][4981] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.34.192/26 handle="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.381 [INFO][4981] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.34.200/26] block=192.168.34.192/26 handle="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.382 [INFO][4981] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.34.200/26] handle="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" host="ci-4081.3.6-n-24b0a814a4" Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.382 [INFO][4981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:28:50.442545 containerd[1739]: 2026-03-07 01:28:50.382 [INFO][4981] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.34.200/26] IPv6=[] ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" HandleID="k8s-pod-network.df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" Mar 7 01:28:50.443212 containerd[1739]: 2026-03-07 01:28:50.386 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0", GenerateName:"whisker-656fbb6f58-", Namespace:"calico-system", SelfLink:"", UID:"b7225c86-cb44-48b4-940d-bbbef231eadf", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656fbb6f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"", Pod:"whisker-656fbb6f58-fhhm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calia4a64a2dbd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:50.443212 containerd[1739]: 2026-03-07 01:28:50.386 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.200/32] ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" Mar 7 01:28:50.443212 containerd[1739]: 2026-03-07 01:28:50.386 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4a64a2dbd7 ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" Mar 7 01:28:50.443212 containerd[1739]: 2026-03-07 01:28:50.395 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" Mar 7 01:28:50.443212 containerd[1739]: 2026-03-07 01:28:50.396 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0", GenerateName:"whisker-656fbb6f58-", Namespace:"calico-system", SelfLink:"", UID:"b7225c86-cb44-48b4-940d-bbbef231eadf", 
ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656fbb6f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476", Pod:"whisker-656fbb6f58-fhhm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4a64a2dbd7", MAC:"3a:54:12:eb:7e:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:28:50.443212 containerd[1739]: 2026-03-07 01:28:50.414 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476" Namespace="calico-system" Pod="whisker-656fbb6f58-fhhm2" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--656fbb6f58--fhhm2-eth0" Mar 7 01:28:50.464356 containerd[1739]: time="2026-03-07T01:28:50.462539788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-765cff7995-mchvs,Uid:73e5dc10-ec6b-44b7-a83a-ebda472da3b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259\"" Mar 7 01:28:50.476897 containerd[1739]: time="2026-03-07T01:28:50.476855180Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7b4f64765f-kwd8x,Uid:b138cb7e-09ad-4994-8633-ecd968afa99c,Namespace:calico-system,Attempt:1,} returns sandbox id \"41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b\"" Mar 7 01:28:50.512904 containerd[1739]: time="2026-03-07T01:28:50.512753060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:28:50.512904 containerd[1739]: time="2026-03-07T01:28:50.512822180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:28:50.512904 containerd[1739]: time="2026-03-07T01:28:50.512837740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.513563 containerd[1739]: time="2026-03-07T01:28:50.512922900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:28:50.513848 systemd[1]: Started cri-containerd-73d66e8e8ddb469c5de745033919a35b407e189bf0bc3a82a0199e98e0bd56c5.scope - libcontainer container 73d66e8e8ddb469c5de745033919a35b407e189bf0bc3a82a0199e98e0bd56c5. Mar 7 01:28:50.537274 systemd[1]: Started cri-containerd-df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476.scope - libcontainer container df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476. 
Mar 7 01:28:50.572165 containerd[1739]: time="2026-03-07T01:28:50.571784072Z" level=info msg="StartContainer for \"73d66e8e8ddb469c5de745033919a35b407e189bf0bc3a82a0199e98e0bd56c5\" returns successfully" Mar 7 01:28:50.604009 containerd[1739]: time="2026-03-07T01:28:50.603512343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656fbb6f58-fhhm2,Uid:b7225c86-cb44-48b4-940d-bbbef231eadf,Namespace:calico-system,Attempt:0,} returns sandbox id \"df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476\"" Mar 7 01:28:50.630060 kernel: calico-node[5127]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:28:50.678218 systemd-networkd[1356]: cali11fbbe5af9b: Gained IPv6LL Mar 7 01:28:50.742202 systemd-networkd[1356]: caliedfcfd3a75e: Gained IPv6LL Mar 7 01:28:51.074848 systemd-networkd[1356]: vxlan.calico: Link UP Mar 7 01:28:51.074856 systemd-networkd[1356]: vxlan.calico: Gained carrier Mar 7 01:28:51.126138 systemd-networkd[1356]: caliacc68ccd682: Gained IPv6LL Mar 7 01:28:51.254207 systemd-networkd[1356]: cali1c05fd67a80: Gained IPv6LL Mar 7 01:28:51.399583 kubelet[3162]: I0307 01:28:51.399523 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2vtc8" podStartSLOduration=37.399507519 podStartE2EDuration="37.399507519s" podCreationTimestamp="2026-03-07 01:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:28:51.380671037 +0000 UTC m=+43.395831623" watchObservedRunningTime="2026-03-07 01:28:51.399507519 +0000 UTC m=+43.414668105" Mar 7 01:28:51.421462 kubelet[3162]: I0307 01:28:51.421044 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cmk8x" podStartSLOduration=37.421024487 podStartE2EDuration="37.421024487s" podCreationTimestamp="2026-03-07 01:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:28:51.401581284 +0000 UTC m=+43.416741990" watchObservedRunningTime="2026-03-07 01:28:51.421024487 +0000 UTC m=+43.436185073" Mar 7 01:28:51.510203 systemd-networkd[1356]: calia4a64a2dbd7: Gained IPv6LL Mar 7 01:28:51.510476 systemd-networkd[1356]: cali61ed460240d: Gained IPv6LL Mar 7 01:28:51.830397 systemd-networkd[1356]: cali6dc28c71b40: Gained IPv6LL Mar 7 01:28:51.885023 containerd[1739]: time="2026-03-07T01:28:51.884241161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:51.888093 containerd[1739]: time="2026-03-07T01:28:51.887872169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 01:28:51.891713 containerd[1739]: time="2026-03-07T01:28:51.891663658Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:51.896506 containerd[1739]: time="2026-03-07T01:28:51.896455108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:51.897386 containerd[1739]: time="2026-03-07T01:28:51.897272510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 3.047097521s" Mar 7 01:28:51.897386 containerd[1739]: time="2026-03-07T01:28:51.897304470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference 
\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 01:28:51.899100 containerd[1739]: time="2026-03-07T01:28:51.898316792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:28:51.904911 containerd[1739]: time="2026-03-07T01:28:51.904876727Z" level=info msg="CreateContainer within sandbox \"8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:28:51.953182 containerd[1739]: time="2026-03-07T01:28:51.953138035Z" level=info msg="CreateContainer within sandbox \"8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9d738abfbb0e9004224f5486835c3d825de69df1a60d81c889a52a751c6dbd59\"" Mar 7 01:28:51.955092 containerd[1739]: time="2026-03-07T01:28:51.953811796Z" level=info msg="StartContainer for \"9d738abfbb0e9004224f5486835c3d825de69df1a60d81c889a52a751c6dbd59\"" Mar 7 01:28:51.987165 systemd[1]: Started cri-containerd-9d738abfbb0e9004224f5486835c3d825de69df1a60d81c889a52a751c6dbd59.scope - libcontainer container 9d738abfbb0e9004224f5486835c3d825de69df1a60d81c889a52a751c6dbd59. 
Mar 7 01:28:52.018286 containerd[1739]: time="2026-03-07T01:28:52.018244900Z" level=info msg="StartContainer for \"9d738abfbb0e9004224f5486835c3d825de69df1a60d81c889a52a751c6dbd59\" returns successfully" Mar 7 01:28:52.278157 systemd-networkd[1356]: vxlan.calico: Gained IPv6LL Mar 7 01:28:53.781985 containerd[1739]: time="2026-03-07T01:28:53.781764476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:53.784921 containerd[1739]: time="2026-03-07T01:28:53.784879483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 01:28:53.789108 containerd[1739]: time="2026-03-07T01:28:53.788642931Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:53.795618 containerd[1739]: time="2026-03-07T01:28:53.795571507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:53.796390 containerd[1739]: time="2026-03-07T01:28:53.796355629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 1.898002356s" Mar 7 01:28:53.796457 containerd[1739]: time="2026-03-07T01:28:53.796391949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 01:28:53.797922 containerd[1739]: 
time="2026-03-07T01:28:53.797893752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:28:53.805508 containerd[1739]: time="2026-03-07T01:28:53.805322089Z" level=info msg="CreateContainer within sandbox \"053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:28:53.843549 containerd[1739]: time="2026-03-07T01:28:53.843492894Z" level=info msg="CreateContainer within sandbox \"053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8c32828fb85dcd18b813adc7df811d855ac652c3b4228b38a6f1db08485659cf\"" Mar 7 01:28:53.845680 containerd[1739]: time="2026-03-07T01:28:53.844466176Z" level=info msg="StartContainer for \"8c32828fb85dcd18b813adc7df811d855ac652c3b4228b38a6f1db08485659cf\"" Mar 7 01:28:53.885169 systemd[1]: Started cri-containerd-8c32828fb85dcd18b813adc7df811d855ac652c3b4228b38a6f1db08485659cf.scope - libcontainer container 8c32828fb85dcd18b813adc7df811d855ac652c3b4228b38a6f1db08485659cf. Mar 7 01:28:53.920193 containerd[1739]: time="2026-03-07T01:28:53.920058505Z" level=info msg="StartContainer for \"8c32828fb85dcd18b813adc7df811d855ac652c3b4228b38a6f1db08485659cf\" returns successfully" Mar 7 01:28:56.027127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962816171.mount: Deactivated successfully. 
Mar 7 01:28:56.078754 kubelet[3162]: I0307 01:28:56.077321 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-765cff7995-szzxj" podStartSLOduration=25.36541559 podStartE2EDuration="29.077307834s" podCreationTimestamp="2026-03-07 01:28:27 +0000 UTC" firstStartedPulling="2026-03-07 01:28:50.085724987 +0000 UTC m=+42.100885573" lastFinishedPulling="2026-03-07 01:28:53.797617231 +0000 UTC m=+45.812777817" observedRunningTime="2026-03-07 01:28:54.403217503 +0000 UTC m=+46.418378089" watchObservedRunningTime="2026-03-07 01:28:56.077307834 +0000 UTC m=+48.092468420" Mar 7 01:28:56.438918 containerd[1739]: time="2026-03-07T01:28:56.438246523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:56.441459 containerd[1739]: time="2026-03-07T01:28:56.441432809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 7 01:28:56.445138 containerd[1739]: time="2026-03-07T01:28:56.445116377Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:56.450289 containerd[1739]: time="2026-03-07T01:28:56.450246748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:56.451144 containerd[1739]: time="2026-03-07T01:28:56.451112270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size 
\"51613826\" in 2.653185238s" Mar 7 01:28:56.451339 containerd[1739]: time="2026-03-07T01:28:56.451232310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 7 01:28:56.452615 containerd[1739]: time="2026-03-07T01:28:56.452590553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:28:56.461542 containerd[1739]: time="2026-03-07T01:28:56.461390692Z" level=info msg="CreateContainer within sandbox \"2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:28:56.501340 containerd[1739]: time="2026-03-07T01:28:56.501226857Z" level=info msg="CreateContainer within sandbox \"2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9640b1d3eff556ff195ee973222d0c42a2dcf2733d523446b34da86cc285e9e4\"" Mar 7 01:28:56.514285 containerd[1739]: time="2026-03-07T01:28:56.514249724Z" level=info msg="StartContainer for \"9640b1d3eff556ff195ee973222d0c42a2dcf2733d523446b34da86cc285e9e4\"" Mar 7 01:28:56.553166 systemd[1]: Started cri-containerd-9640b1d3eff556ff195ee973222d0c42a2dcf2733d523446b34da86cc285e9e4.scope - libcontainer container 9640b1d3eff556ff195ee973222d0c42a2dcf2733d523446b34da86cc285e9e4. 
Mar 7 01:28:56.599567 containerd[1739]: time="2026-03-07T01:28:56.599525986Z" level=info msg="StartContainer for \"9640b1d3eff556ff195ee973222d0c42a2dcf2733d523446b34da86cc285e9e4\" returns successfully" Mar 7 01:28:56.798708 containerd[1739]: time="2026-03-07T01:28:56.798014569Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:56.801152 containerd[1739]: time="2026-03-07T01:28:56.801124575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:28:56.803387 containerd[1739]: time="2026-03-07T01:28:56.803355780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 350.730747ms" Mar 7 01:28:56.803496 containerd[1739]: time="2026-03-07T01:28:56.803480420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 01:28:56.804502 containerd[1739]: time="2026-03-07T01:28:56.804485582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:28:56.812638 containerd[1739]: time="2026-03-07T01:28:56.812605760Z" level=info msg="CreateContainer within sandbox \"aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:28:56.855833 containerd[1739]: time="2026-03-07T01:28:56.855715692Z" level=info msg="CreateContainer within sandbox \"aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"7051c1b3d1034a02a9f6ff099e8a2d57ed52a92a2c31a2bb5d65b123a5fe9100\"" Mar 7 01:28:56.857423 containerd[1739]: time="2026-03-07T01:28:56.856468173Z" level=info msg="StartContainer for \"7051c1b3d1034a02a9f6ff099e8a2d57ed52a92a2c31a2bb5d65b123a5fe9100\"" Mar 7 01:28:56.885130 systemd[1]: Started cri-containerd-7051c1b3d1034a02a9f6ff099e8a2d57ed52a92a2c31a2bb5d65b123a5fe9100.scope - libcontainer container 7051c1b3d1034a02a9f6ff099e8a2d57ed52a92a2c31a2bb5d65b123a5fe9100. Mar 7 01:28:56.921443 containerd[1739]: time="2026-03-07T01:28:56.921321991Z" level=info msg="StartContainer for \"7051c1b3d1034a02a9f6ff099e8a2d57ed52a92a2c31a2bb5d65b123a5fe9100\" returns successfully" Mar 7 01:28:57.450340 kubelet[3162]: I0307 01:28:57.449636 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-765cff7995-mchvs" podStartSLOduration=24.112448732 podStartE2EDuration="30.449618876s" podCreationTimestamp="2026-03-07 01:28:27 +0000 UTC" firstStartedPulling="2026-03-07 01:28:50.467129678 +0000 UTC m=+42.482290264" lastFinishedPulling="2026-03-07 01:28:56.804299862 +0000 UTC m=+48.819460408" observedRunningTime="2026-03-07 01:28:57.413299319 +0000 UTC m=+49.428459905" watchObservedRunningTime="2026-03-07 01:28:57.449618876 +0000 UTC m=+49.464779462" Mar 7 01:28:58.628715 kubelet[3162]: I0307 01:28:58.628079 3162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:28:59.682253 containerd[1739]: time="2026-03-07T01:28:59.682187870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:59.687288 containerd[1739]: time="2026-03-07T01:28:59.687133401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 01:28:59.690967 containerd[1739]: time="2026-03-07T01:28:59.690863849Z" level=info msg="ImageCreate event 
name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:59.698561 containerd[1739]: time="2026-03-07T01:28:59.698332425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:28:59.699124 containerd[1739]: time="2026-03-07T01:28:59.699068826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.894439323s" Mar 7 01:28:59.699124 containerd[1739]: time="2026-03-07T01:28:59.699099266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 01:28:59.728099 containerd[1739]: time="2026-03-07T01:28:59.727859487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:28:59.729273 containerd[1739]: time="2026-03-07T01:28:59.729244050Z" level=info msg="CreateContainer within sandbox \"41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:28:59.780347 containerd[1739]: time="2026-03-07T01:28:59.780270119Z" level=info msg="CreateContainer within sandbox \"41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"63e480a993743780e7874f9dd9338ade076965c32b2ab320d6861065480271b4\"" Mar 7 01:28:59.781284 containerd[1739]: 
time="2026-03-07T01:28:59.781148281Z" level=info msg="StartContainer for \"63e480a993743780e7874f9dd9338ade076965c32b2ab320d6861065480271b4\"" Mar 7 01:28:59.839179 systemd[1]: Started cri-containerd-63e480a993743780e7874f9dd9338ade076965c32b2ab320d6861065480271b4.scope - libcontainer container 63e480a993743780e7874f9dd9338ade076965c32b2ab320d6861065480271b4. Mar 7 01:28:59.877698 containerd[1739]: time="2026-03-07T01:28:59.877405646Z" level=info msg="StartContainer for \"63e480a993743780e7874f9dd9338ade076965c32b2ab320d6861065480271b4\" returns successfully" Mar 7 01:29:00.416679 kubelet[3162]: I0307 01:29:00.416328 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-xfb4w" podStartSLOduration=26.053239515 podStartE2EDuration="32.416311953s" podCreationTimestamp="2026-03-07 01:28:28 +0000 UTC" firstStartedPulling="2026-03-07 01:28:50.089382875 +0000 UTC m=+42.104543461" lastFinishedPulling="2026-03-07 01:28:56.452455313 +0000 UTC m=+48.467615899" observedRunningTime="2026-03-07 01:28:57.450403518 +0000 UTC m=+49.465564104" watchObservedRunningTime="2026-03-07 01:29:00.416311953 +0000 UTC m=+52.431472539" Mar 7 01:29:00.418645 kubelet[3162]: I0307 01:29:00.418299 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b4f64765f-kwd8x" podStartSLOduration=21.198138077 podStartE2EDuration="30.418277238s" podCreationTimestamp="2026-03-07 01:28:30 +0000 UTC" firstStartedPulling="2026-03-07 01:28:50.48118655 +0000 UTC m=+42.496347136" lastFinishedPulling="2026-03-07 01:28:59.701325711 +0000 UTC m=+51.716486297" observedRunningTime="2026-03-07 01:29:00.415879712 +0000 UTC m=+52.431040338" watchObservedRunningTime="2026-03-07 01:29:00.418277238 +0000 UTC m=+52.433437824" Mar 7 01:29:01.192573 containerd[1739]: time="2026-03-07T01:29:01.192533126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:01.196846 containerd[1739]: time="2026-03-07T01:29:01.196798255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 01:29:01.200136 containerd[1739]: time="2026-03-07T01:29:01.200105182Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:01.208659 containerd[1739]: time="2026-03-07T01:29:01.208564600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:01.210353 containerd[1739]: time="2026-03-07T01:29:01.209805923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.481904756s" Mar 7 01:29:01.210353 containerd[1739]: time="2026-03-07T01:29:01.209844243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 01:29:01.211625 containerd[1739]: time="2026-03-07T01:29:01.211354526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:29:01.221173 containerd[1739]: time="2026-03-07T01:29:01.221134467Z" level=info msg="CreateContainer within sandbox \"df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:29:01.261733 containerd[1739]: time="2026-03-07T01:29:01.261621793Z" level=info 
msg="CreateContainer within sandbox \"df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5705dc2108e0589941eb84bda70235b7d5958528b14c35d5f235f50168b62264\"" Mar 7 01:29:01.265023 containerd[1739]: time="2026-03-07T01:29:01.262489395Z" level=info msg="StartContainer for \"5705dc2108e0589941eb84bda70235b7d5958528b14c35d5f235f50168b62264\"" Mar 7 01:29:01.296161 systemd[1]: Started cri-containerd-5705dc2108e0589941eb84bda70235b7d5958528b14c35d5f235f50168b62264.scope - libcontainer container 5705dc2108e0589941eb84bda70235b7d5958528b14c35d5f235f50168b62264. Mar 7 01:29:01.330470 containerd[1739]: time="2026-03-07T01:29:01.330426020Z" level=info msg="StartContainer for \"5705dc2108e0589941eb84bda70235b7d5958528b14c35d5f235f50168b62264\" returns successfully" Mar 7 01:29:02.629511 containerd[1739]: time="2026-03-07T01:29:02.629459065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:02.633153 containerd[1739]: time="2026-03-07T01:29:02.632976350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 7 01:29:02.636692 containerd[1739]: time="2026-03-07T01:29:02.636446154Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:02.641368 containerd[1739]: time="2026-03-07T01:29:02.641325961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:02.642286 containerd[1739]: time="2026-03-07T01:29:02.642255322Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.430869636s" Mar 7 01:29:02.642372 containerd[1739]: time="2026-03-07T01:29:02.642288162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 7 01:29:02.645460 containerd[1739]: time="2026-03-07T01:29:02.645275686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:29:02.651793 containerd[1739]: time="2026-03-07T01:29:02.651762655Z" level=info msg="CreateContainer within sandbox \"8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:29:02.688817 containerd[1739]: time="2026-03-07T01:29:02.688777665Z" level=info msg="CreateContainer within sandbox \"8b0245baa7cdd74796e7d2ae5daf57dcac5f940b7d359e1f0acbcc05f4599705\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9d854a07db332690491b2e6d4b06153e1bbd5446f15fe5451f7fef6ea4c0a3ab\"" Mar 7 01:29:02.689571 containerd[1739]: time="2026-03-07T01:29:02.689535066Z" level=info msg="StartContainer for \"9d854a07db332690491b2e6d4b06153e1bbd5446f15fe5451f7fef6ea4c0a3ab\"" Mar 7 01:29:02.724134 systemd[1]: Started cri-containerd-9d854a07db332690491b2e6d4b06153e1bbd5446f15fe5451f7fef6ea4c0a3ab.scope - libcontainer container 9d854a07db332690491b2e6d4b06153e1bbd5446f15fe5451f7fef6ea4c0a3ab. 
Mar 7 01:29:02.755804 containerd[1739]: time="2026-03-07T01:29:02.755493594Z" level=info msg="StartContainer for \"9d854a07db332690491b2e6d4b06153e1bbd5446f15fe5451f7fef6ea4c0a3ab\" returns successfully" Mar 7 01:29:03.185071 kubelet[3162]: I0307 01:29:03.184490 3162 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:29:03.185071 kubelet[3162]: I0307 01:29:03.184519 3162 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:29:03.424944 kubelet[3162]: I0307 01:29:03.424879 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-66vcb" podStartSLOduration=19.630482957 podStartE2EDuration="33.424860294s" podCreationTimestamp="2026-03-07 01:28:30 +0000 UTC" firstStartedPulling="2026-03-07 01:28:48.849157347 +0000 UTC m=+40.864317933" lastFinishedPulling="2026-03-07 01:29:02.643534724 +0000 UTC m=+54.658695270" observedRunningTime="2026-03-07 01:29:03.423901573 +0000 UTC m=+55.439062159" watchObservedRunningTime="2026-03-07 01:29:03.424860294 +0000 UTC m=+55.440020880" Mar 7 01:29:04.132935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617900732.mount: Deactivated successfully. 
Mar 7 01:29:04.238928 containerd[1739]: time="2026-03-07T01:29:04.238876828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:04.242244 containerd[1739]: time="2026-03-07T01:29:04.242208472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 01:29:04.251874 containerd[1739]: time="2026-03-07T01:29:04.251823725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.606511319s" Mar 7 01:29:04.253025 containerd[1739]: time="2026-03-07T01:29:04.252996767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 01:29:04.255893 containerd[1739]: time="2026-03-07T01:29:04.255845171Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:04.257125 containerd[1739]: time="2026-03-07T01:29:04.257099732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:29:04.261972 containerd[1739]: time="2026-03-07T01:29:04.261850779Z" level=info msg="CreateContainer within sandbox \"df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:29:04.300396 
containerd[1739]: time="2026-03-07T01:29:04.300275190Z" level=info msg="CreateContainer within sandbox \"df4293c84739d733c2d8c334285e2e666102b163a9bf6f695c89ee9227618476\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7de416d2d9342983e05216ec6e531bd2934798e47a1be43311fce92d5017a9df\"" Mar 7 01:29:04.301182 containerd[1739]: time="2026-03-07T01:29:04.301156032Z" level=info msg="StartContainer for \"7de416d2d9342983e05216ec6e531bd2934798e47a1be43311fce92d5017a9df\"" Mar 7 01:29:04.337183 systemd[1]: Started cri-containerd-7de416d2d9342983e05216ec6e531bd2934798e47a1be43311fce92d5017a9df.scope - libcontainer container 7de416d2d9342983e05216ec6e531bd2934798e47a1be43311fce92d5017a9df. Mar 7 01:29:04.380364 containerd[1739]: time="2026-03-07T01:29:04.380237858Z" level=info msg="StartContainer for \"7de416d2d9342983e05216ec6e531bd2934798e47a1be43311fce92d5017a9df\" returns successfully" Mar 7 01:29:04.429287 kubelet[3162]: I0307 01:29:04.427929 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-656fbb6f58-fhhm2" podStartSLOduration=1.779248141 podStartE2EDuration="15.427914842s" podCreationTimestamp="2026-03-07 01:28:49 +0000 UTC" firstStartedPulling="2026-03-07 01:28:50.605726868 +0000 UTC m=+42.620887454" lastFinishedPulling="2026-03-07 01:29:04.254393569 +0000 UTC m=+56.269554155" observedRunningTime="2026-03-07 01:29:04.425919279 +0000 UTC m=+56.441079865" watchObservedRunningTime="2026-03-07 01:29:04.427914842 +0000 UTC m=+56.443075428" Mar 7 01:29:08.110273 containerd[1739]: time="2026-03-07T01:29:08.110233324Z" level=info msg="StopPodSandbox for \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\"" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.148 [WARNING][5737] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f1d81093-838f-42f2-bd2e-44a2be2ca4cf", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65", Pod:"coredns-674b8bbfcf-2vtc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11fbbe5af9b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 
01:29:08.150 [INFO][5737] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.150 [INFO][5737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" iface="eth0" netns="" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.150 [INFO][5737] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.150 [INFO][5737] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.176 [INFO][5744] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.176 [INFO][5744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.176 [INFO][5744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.187 [WARNING][5744] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.187 [INFO][5744] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.188 [INFO][5744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.192194 containerd[1739]: 2026-03-07 01:29:08.190 [INFO][5737] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.192659 containerd[1739]: time="2026-03-07T01:29:08.192241066Z" level=info msg="TearDown network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\" successfully" Mar 7 01:29:08.192659 containerd[1739]: time="2026-03-07T01:29:08.192267946Z" level=info msg="StopPodSandbox for \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\" returns successfully" Mar 7 01:29:08.193245 containerd[1739]: time="2026-03-07T01:29:08.192864427Z" level=info msg="RemovePodSandbox for \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\"" Mar 7 01:29:08.193245 containerd[1739]: time="2026-03-07T01:29:08.192893987Z" level=info msg="Forcibly stopping sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\"" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.226 [WARNING][5758] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f1d81093-838f-42f2-bd2e-44a2be2ca4cf", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"1cf47fb35e8235c3e5148af5c05d81657fa13baf6d8c1f6c95ef99e8de151a65", Pod:"coredns-674b8bbfcf-2vtc8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11fbbe5af9b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 
01:29:08.226 [INFO][5758] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.226 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" iface="eth0" netns="" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.226 [INFO][5758] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.226 [INFO][5758] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.247 [INFO][5765] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.247 [INFO][5765] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.247 [INFO][5765] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.256 [WARNING][5765] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.256 [INFO][5765] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" HandleID="k8s-pod-network.370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--2vtc8-eth0" Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.258 [INFO][5765] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.261405 containerd[1739]: 2026-03-07 01:29:08.259 [INFO][5758] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65" Mar 7 01:29:08.262552 containerd[1739]: time="2026-03-07T01:29:08.261869140Z" level=info msg="TearDown network for sandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\" successfully" Mar 7 01:29:08.274476 containerd[1739]: time="2026-03-07T01:29:08.274432968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:29:08.274734 containerd[1739]: time="2026-03-07T01:29:08.274714568Z" level=info msg="RemovePodSandbox \"370a071ca9faa0bc9530f9148a3e5c4767258216c4099b17ba9f9f7de60dcb65\" returns successfully" Mar 7 01:29:08.275341 containerd[1739]: time="2026-03-07T01:29:08.275302610Z" level=info msg="StopPodSandbox for \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\"" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.310 [WARNING][5779] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"73e5dc10-ec6b-44b7-a83a-ebda472da3b6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259", Pod:"calico-apiserver-765cff7995-mchvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61ed460240d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.311 [INFO][5779] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.311 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" iface="eth0" netns="" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.311 [INFO][5779] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.311 [INFO][5779] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.329 [INFO][5787] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.329 [INFO][5787] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.329 [INFO][5787] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.338 [WARNING][5787] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.338 [INFO][5787] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.339 [INFO][5787] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.343179 containerd[1739]: 2026-03-07 01:29:08.341 [INFO][5779] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.343910 containerd[1739]: time="2026-03-07T01:29:08.343646841Z" level=info msg="TearDown network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\" successfully" Mar 7 01:29:08.343910 containerd[1739]: time="2026-03-07T01:29:08.343675761Z" level=info msg="StopPodSandbox for \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\" returns successfully" Mar 7 01:29:08.344468 containerd[1739]: time="2026-03-07T01:29:08.344116762Z" level=info msg="RemovePodSandbox for \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\"" Mar 7 01:29:08.344468 containerd[1739]: time="2026-03-07T01:29:08.344143682Z" level=info msg="Forcibly stopping sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\"" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.375 [WARNING][5801] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"73e5dc10-ec6b-44b7-a83a-ebda472da3b6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"aaffd53ca8358869ca7fa325684eb87b74379299088e7240dcfaed0fabc72259", Pod:"calico-apiserver-765cff7995-mchvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61ed460240d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.375 [INFO][5801] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.375 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" iface="eth0" netns="" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.375 [INFO][5801] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.375 [INFO][5801] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.393 [INFO][5808] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.393 [INFO][5808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.393 [INFO][5808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.403 [WARNING][5808] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.403 [INFO][5808] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" HandleID="k8s-pod-network.823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--mchvs-eth0" Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.404 [INFO][5808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.408236 containerd[1739]: 2026-03-07 01:29:08.406 [INFO][5801] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe" Mar 7 01:29:08.410311 containerd[1739]: time="2026-03-07T01:29:08.409982108Z" level=info msg="TearDown network for sandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\" successfully" Mar 7 01:29:08.420384 containerd[1739]: time="2026-03-07T01:29:08.420343331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:29:08.420527 containerd[1739]: time="2026-03-07T01:29:08.420409211Z" level=info msg="RemovePodSandbox \"823e41eff9b925422a6312ec185de9e7ac8c44f4bf9074f6c23439c9acf95bfe\" returns successfully" Mar 7 01:29:08.420794 containerd[1739]: time="2026-03-07T01:29:08.420772812Z" level=info msg="StopPodSandbox for \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\"" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.453 [WARNING][5822] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"e2a5fc71-1b77-4f5e-a88c-b160d32eae5f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c", Pod:"calico-apiserver-765cff7995-szzxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c05fd67a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.454 [INFO][5822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.454 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" iface="eth0" netns="" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.454 [INFO][5822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.454 [INFO][5822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.473 [INFO][5829] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.473 [INFO][5829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.473 [INFO][5829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.482 [WARNING][5829] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.482 [INFO][5829] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.483 [INFO][5829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.487226 containerd[1739]: 2026-03-07 01:29:08.485 [INFO][5822] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.488047 containerd[1739]: time="2026-03-07T01:29:08.487267640Z" level=info msg="TearDown network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\" successfully" Mar 7 01:29:08.488047 containerd[1739]: time="2026-03-07T01:29:08.487292280Z" level=info msg="StopPodSandbox for \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\" returns successfully" Mar 7 01:29:08.488047 containerd[1739]: time="2026-03-07T01:29:08.488013801Z" level=info msg="RemovePodSandbox for \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\"" Mar 7 01:29:08.488047 containerd[1739]: time="2026-03-07T01:29:08.488040041Z" level=info msg="Forcibly stopping sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\"" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.520 [WARNING][5843] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0", GenerateName:"calico-apiserver-765cff7995-", Namespace:"calico-system", SelfLink:"", UID:"e2a5fc71-1b77-4f5e-a88c-b160d32eae5f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"765cff7995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"053bd5bd279cb2636feb8e8086f0e28a13c6400af72b3850f36f5f2bfc597d6c", Pod:"calico-apiserver-765cff7995-szzxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1c05fd67a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.520 [INFO][5843] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.520 [INFO][5843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" iface="eth0" netns="" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.520 [INFO][5843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.520 [INFO][5843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.539 [INFO][5850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.539 [INFO][5850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.540 [INFO][5850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.548 [WARNING][5850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.549 [INFO][5850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" HandleID="k8s-pod-network.7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--apiserver--765cff7995--szzxj-eth0" Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.550 [INFO][5850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.555828 containerd[1739]: 2026-03-07 01:29:08.552 [INFO][5843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01" Mar 7 01:29:08.556254 containerd[1739]: time="2026-03-07T01:29:08.555881672Z" level=info msg="TearDown network for sandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\" successfully" Mar 7 01:29:08.564526 containerd[1739]: time="2026-03-07T01:29:08.564467131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:29:08.564639 containerd[1739]: time="2026-03-07T01:29:08.564581411Z" level=info msg="RemovePodSandbox \"7c853f41f88100a65e0f06f052b2d91b4b4b15b796c06e3f30560b7b4dde2c01\" returns successfully" Mar 7 01:29:08.565026 containerd[1739]: time="2026-03-07T01:29:08.565003252Z" level=info msg="StopPodSandbox for \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\"" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.597 [WARNING][5864] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.597 [INFO][5864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.598 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" iface="eth0" netns="" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.598 [INFO][5864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.598 [INFO][5864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.617 [INFO][5871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.617 [INFO][5871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.617 [INFO][5871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.626 [WARNING][5871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.626 [INFO][5871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.627 [INFO][5871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.631004 containerd[1739]: 2026-03-07 01:29:08.629 [INFO][5864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.631504 containerd[1739]: time="2026-03-07T01:29:08.631121918Z" level=info msg="TearDown network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\" successfully" Mar 7 01:29:08.631504 containerd[1739]: time="2026-03-07T01:29:08.631150758Z" level=info msg="StopPodSandbox for \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\" returns successfully" Mar 7 01:29:08.632291 containerd[1739]: time="2026-03-07T01:29:08.631965720Z" level=info msg="RemovePodSandbox for \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\"" Mar 7 01:29:08.632291 containerd[1739]: time="2026-03-07T01:29:08.632017360Z" level=info msg="Forcibly stopping sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\"" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.672 [WARNING][5885] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" WorkloadEndpoint="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.672 [INFO][5885] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.672 [INFO][5885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" iface="eth0" netns="" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.672 [INFO][5885] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.672 [INFO][5885] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.694 [INFO][5894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.695 [INFO][5894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.695 [INFO][5894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.703 [WARNING][5894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.703 [INFO][5894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" HandleID="k8s-pod-network.5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Workload="ci--4081.3.6--n--24b0a814a4-k8s-whisker--5b8f788665--slvv4-eth0" Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.704 [INFO][5894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.709085 containerd[1739]: 2026-03-07 01:29:08.706 [INFO][5885] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49" Mar 7 01:29:08.709085 containerd[1739]: time="2026-03-07T01:29:08.708106009Z" level=info msg="TearDown network for sandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\" successfully" Mar 7 01:29:08.718760 containerd[1739]: time="2026-03-07T01:29:08.718711753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:29:08.718949 containerd[1739]: time="2026-03-07T01:29:08.718795353Z" level=info msg="RemovePodSandbox \"5c7f180d6afadbcea47957cda95871b5c597ab55f4af2aa7e7523db6fac60e49\" returns successfully" Mar 7 01:29:08.719669 containerd[1739]: time="2026-03-07T01:29:08.719347154Z" level=info msg="StopPodSandbox for \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\"" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.752 [WARNING][5908] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83", Pod:"goldmane-5b85766d88-xfb4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"caliedfcfd3a75e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.752 [INFO][5908] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.752 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" iface="eth0" netns="" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.752 [INFO][5908] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.752 [INFO][5908] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.776 [INFO][5916] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.776 [INFO][5916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.776 [INFO][5916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.786 [WARNING][5916] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.786 [INFO][5916] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.788 [INFO][5916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.791269 containerd[1739]: 2026-03-07 01:29:08.789 [INFO][5908] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.791869 containerd[1739]: time="2026-03-07T01:29:08.791741274Z" level=info msg="TearDown network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\" successfully" Mar 7 01:29:08.791869 containerd[1739]: time="2026-03-07T01:29:08.791779675Z" level=info msg="StopPodSandbox for \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\" returns successfully" Mar 7 01:29:08.792626 containerd[1739]: time="2026-03-07T01:29:08.792501916Z" level=info msg="RemovePodSandbox for \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\"" Mar 7 01:29:08.792626 containerd[1739]: time="2026-03-07T01:29:08.792530996Z" level=info msg="Forcibly stopping sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\"" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.827 [WARNING][5930] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"90c77c2c-38f6-4e7b-a0d0-324bcdac7ea5", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"2fb0d44a8f03f47096785d5a118c3e3765742c645fa71f154fdeb56f58a29c83", Pod:"goldmane-5b85766d88-xfb4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliedfcfd3a75e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.827 [INFO][5930] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.827 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" iface="eth0" netns="" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.827 [INFO][5930] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.827 [INFO][5930] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.847 [INFO][5937] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.847 [INFO][5937] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.847 [INFO][5937] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.856 [WARNING][5937] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.856 [INFO][5937] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" HandleID="k8s-pod-network.832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Workload="ci--4081.3.6--n--24b0a814a4-k8s-goldmane--5b85766d88--xfb4w-eth0" Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.857 [INFO][5937] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.861031 containerd[1739]: 2026-03-07 01:29:08.859 [INFO][5930] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045" Mar 7 01:29:08.862412 containerd[1739]: time="2026-03-07T01:29:08.861050708Z" level=info msg="TearDown network for sandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\" successfully" Mar 7 01:29:08.869282 containerd[1739]: time="2026-03-07T01:29:08.869241446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:29:08.869387 containerd[1739]: time="2026-03-07T01:29:08.869316406Z" level=info msg="RemovePodSandbox \"832c2581526a87f2078fe9128de2a6cd5de5a500973c1e212901f3cc715e5045\" returns successfully" Mar 7 01:29:08.870013 containerd[1739]: time="2026-03-07T01:29:08.869818087Z" level=info msg="StopPodSandbox for \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\"" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.902 [WARNING][5951] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071a7194-ad19-4e52-9195-4caf3f158140", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811", Pod:"coredns-674b8bbfcf-cmk8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6dc28c71b40", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.903 [INFO][5951] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.903 [INFO][5951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" iface="eth0" netns="" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.903 [INFO][5951] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.903 [INFO][5951] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.921 [INFO][5958] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.921 [INFO][5958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.921 [INFO][5958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.930 [WARNING][5958] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.930 [INFO][5958] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.931 [INFO][5958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:08.935917 containerd[1739]: 2026-03-07 01:29:08.934 [INFO][5951] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:08.936582 containerd[1739]: time="2026-03-07T01:29:08.936440875Z" level=info msg="TearDown network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\" successfully" Mar 7 01:29:08.936582 containerd[1739]: time="2026-03-07T01:29:08.936472435Z" level=info msg="StopPodSandbox for \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\" returns successfully" Mar 7 01:29:08.937176 containerd[1739]: time="2026-03-07T01:29:08.936897036Z" level=info msg="RemovePodSandbox for \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\"" Mar 7 01:29:08.937176 containerd[1739]: time="2026-03-07T01:29:08.936924756Z" level=info msg="Forcibly stopping sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\"" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.969 [WARNING][5972] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"071a7194-ad19-4e52-9195-4caf3f158140", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"55e9bb491aec8a059fd2d2e02c6977beb08acb759e5e8c29272199785ff21811", Pod:"coredns-674b8bbfcf-cmk8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6dc28c71b40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 
01:29:08.969 [INFO][5972] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.969 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" iface="eth0" netns="" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.969 [INFO][5972] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.969 [INFO][5972] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.987 [INFO][5979] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.988 [INFO][5979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.988 [INFO][5979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.996 [WARNING][5979] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.996 [INFO][5979] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" HandleID="k8s-pod-network.090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Workload="ci--4081.3.6--n--24b0a814a4-k8s-coredns--674b8bbfcf--cmk8x-eth0" Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:08.998 [INFO][5979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:09.003162 containerd[1739]: 2026-03-07 01:29:09.000 [INFO][5972] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf" Mar 7 01:29:09.004529 containerd[1739]: time="2026-03-07T01:29:09.003588904Z" level=info msg="TearDown network for sandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\" successfully" Mar 7 01:29:09.012109 containerd[1739]: time="2026-03-07T01:29:09.012074363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:29:09.012257 containerd[1739]: time="2026-03-07T01:29:09.012240483Z" level=info msg="RemovePodSandbox \"090761589a8bb3a36eb10dca82a44142995354e454e7fb73ce8368e49145dedf\" returns successfully" Mar 7 01:29:09.012781 containerd[1739]: time="2026-03-07T01:29:09.012757924Z" level=info msg="StopPodSandbox for \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\"" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.044 [WARNING][5993] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0", GenerateName:"calico-kube-controllers-7b4f64765f-", Namespace:"calico-system", SelfLink:"", UID:"b138cb7e-09ad-4994-8633-ecd968afa99c", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4f64765f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b", Pod:"calico-kube-controllers-7b4f64765f-kwd8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.199/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacc68ccd682", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.044 [INFO][5993] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.044 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" iface="eth0" netns="" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.044 [INFO][5993] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.044 [INFO][5993] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.062 [INFO][6001] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.062 [INFO][6001] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.062 [INFO][6001] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.071 [WARNING][6001] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.071 [INFO][6001] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.073 [INFO][6001] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:29:09.076388 containerd[1739]: 2026-03-07 01:29:09.074 [INFO][5993] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:29:09.076388 containerd[1739]: time="2026-03-07T01:29:09.076314305Z" level=info msg="TearDown network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\" successfully" Mar 7 01:29:09.076388 containerd[1739]: time="2026-03-07T01:29:09.076338905Z" level=info msg="StopPodSandbox for \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\" returns successfully" Mar 7 01:29:09.077464 containerd[1739]: time="2026-03-07T01:29:09.076786786Z" level=info msg="RemovePodSandbox for \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\"" Mar 7 01:29:09.077464 containerd[1739]: time="2026-03-07T01:29:09.076815866Z" level=info msg="Forcibly stopping sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\"" Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.108 [WARNING][6016] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0", GenerateName:"calico-kube-controllers-7b4f64765f-", Namespace:"calico-system", SelfLink:"", UID:"b138cb7e-09ad-4994-8633-ecd968afa99c", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 28, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4f64765f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-24b0a814a4", ContainerID:"41539d0ec6865f2d10b758d33017b3fa4ca85046078fb68c8d95650b89136f3b", Pod:"calico-kube-controllers-7b4f64765f-kwd8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacc68ccd682", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.108 [INFO][6016] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.108 [INFO][6016] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" iface="eth0" netns="" Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.109 [INFO][6016] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.109 [INFO][6016] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.128 [INFO][6023] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0" Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.128 [INFO][6023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.128 [INFO][6023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.143 [WARNING][6023] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0"
Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.143 [INFO][6023] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" HandleID="k8s-pod-network.c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7" Workload="ci--4081.3.6--n--24b0a814a4-k8s-calico--kube--controllers--7b4f64765f--kwd8x-eth0"
Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.145 [INFO][6023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:29:09.149183 containerd[1739]: 2026-03-07 01:29:09.147 [INFO][6016] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7"
Mar 7 01:29:09.149822 containerd[1739]: time="2026-03-07T01:29:09.149234787Z" level=info msg="TearDown network for sandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\" successfully"
Mar 7 01:29:09.158798 containerd[1739]: time="2026-03-07T01:29:09.158736448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:29:09.158921 containerd[1739]: time="2026-03-07T01:29:09.158817728Z" level=info msg="RemovePodSandbox \"c8207317b80fbb18df49dfedaf4737a30abcb592803a03280660058e34cfefe7\" returns successfully"
Mar 7 01:29:46.151113 kubelet[3162]: I0307 01:29:46.151077 3162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:30:22.730268 systemd[1]: Started sshd@7-10.200.20.41:22-10.200.16.10:56190.service - OpenSSH per-connection server daemon (10.200.16.10:56190).
Mar 7 01:30:23.222269 sshd[6304]: Accepted publickey for core from 10.200.16.10 port 56190 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:23.224786 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:23.229299 systemd-logind[1711]: New session 10 of user core.
Mar 7 01:30:23.232162 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:30:23.648741 sshd[6304]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:23.652182 systemd-logind[1711]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:30:23.653130 systemd[1]: sshd@7-10.200.20.41:22-10.200.16.10:56190.service: Deactivated successfully.
Mar 7 01:30:23.654971 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:30:23.657853 systemd-logind[1711]: Removed session 10.
Mar 7 01:30:28.742305 systemd[1]: Started sshd@8-10.200.20.41:22-10.200.16.10:56194.service - OpenSSH per-connection server daemon (10.200.16.10:56194).
Mar 7 01:30:29.245551 sshd[6373]: Accepted publickey for core from 10.200.16.10 port 56194 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:29.247543 sshd[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:29.251681 systemd-logind[1711]: New session 11 of user core.
Mar 7 01:30:29.255156 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:30:29.669216 sshd[6373]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:29.672344 systemd[1]: sshd@8-10.200.20.41:22-10.200.16.10:56194.service: Deactivated successfully.
Mar 7 01:30:29.674295 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:30:29.675101 systemd-logind[1711]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:30:29.676468 systemd-logind[1711]: Removed session 11.
Mar 7 01:30:34.762521 systemd[1]: Started sshd@9-10.200.20.41:22-10.200.16.10:60728.service - OpenSSH per-connection server daemon (10.200.16.10:60728).
Mar 7 01:30:35.255597 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 60728 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:35.257296 sshd[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:35.261857 systemd-logind[1711]: New session 12 of user core.
Mar 7 01:30:35.269391 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:30:35.668943 sshd[6409]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:35.671910 systemd[1]: sshd@9-10.200.20.41:22-10.200.16.10:60728.service: Deactivated successfully.
Mar 7 01:30:35.674337 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:30:35.676764 systemd-logind[1711]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:30:35.677841 systemd-logind[1711]: Removed session 12.
Mar 7 01:30:40.761242 systemd[1]: Started sshd@10-10.200.20.41:22-10.200.16.10:51246.service - OpenSSH per-connection server daemon (10.200.16.10:51246).
Mar 7 01:30:41.247331 sshd[6422]: Accepted publickey for core from 10.200.16.10 port 51246 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:41.248176 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:41.252385 systemd-logind[1711]: New session 13 of user core.
Mar 7 01:30:41.258142 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:30:41.658160 sshd[6422]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:41.662584 systemd-logind[1711]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:30:41.663123 systemd[1]: sshd@10-10.200.20.41:22-10.200.16.10:51246.service: Deactivated successfully.
Mar 7 01:30:41.665381 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:30:41.666677 systemd-logind[1711]: Removed session 13.
Mar 7 01:30:41.747309 systemd[1]: Started sshd@11-10.200.20.41:22-10.200.16.10:51256.service - OpenSSH per-connection server daemon (10.200.16.10:51256).
Mar 7 01:30:42.238800 sshd[6435]: Accepted publickey for core from 10.200.16.10 port 51256 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:42.240242 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:42.245488 systemd-logind[1711]: New session 14 of user core.
Mar 7 01:30:42.252143 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:30:42.685877 sshd[6435]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:42.689832 systemd[1]: sshd@11-10.200.20.41:22-10.200.16.10:51256.service: Deactivated successfully.
Mar 7 01:30:42.693600 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:30:42.694647 systemd-logind[1711]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:30:42.695510 systemd-logind[1711]: Removed session 14.
Mar 7 01:30:42.771925 systemd[1]: Started sshd@12-10.200.20.41:22-10.200.16.10:51270.service - OpenSSH per-connection server daemon (10.200.16.10:51270).
Mar 7 01:30:43.258299 sshd[6446]: Accepted publickey for core from 10.200.16.10 port 51270 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:43.259732 sshd[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:43.263344 systemd-logind[1711]: New session 15 of user core.
Mar 7 01:30:43.274150 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:30:43.714691 sshd[6446]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:43.718813 systemd[1]: sshd@12-10.200.20.41:22-10.200.16.10:51270.service: Deactivated successfully.
Mar 7 01:30:43.722710 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:30:43.723535 systemd-logind[1711]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:30:43.724511 systemd-logind[1711]: Removed session 15.
Mar 7 01:30:48.806178 systemd[1]: Started sshd@13-10.200.20.41:22-10.200.16.10:51282.service - OpenSSH per-connection server daemon (10.200.16.10:51282).
Mar 7 01:30:49.306917 sshd[6481]: Accepted publickey for core from 10.200.16.10 port 51282 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:49.308537 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:49.313863 systemd-logind[1711]: New session 16 of user core.
Mar 7 01:30:49.317240 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:30:49.730575 sshd[6481]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:49.734185 systemd-logind[1711]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:30:49.734887 systemd[1]: sshd@13-10.200.20.41:22-10.200.16.10:51282.service: Deactivated successfully.
Mar 7 01:30:49.736949 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:30:49.738305 systemd-logind[1711]: Removed session 16.
Mar 7 01:30:54.822220 systemd[1]: Started sshd@14-10.200.20.41:22-10.200.16.10:56346.service - OpenSSH per-connection server daemon (10.200.16.10:56346).
Mar 7 01:30:55.305614 sshd[6514]: Accepted publickey for core from 10.200.16.10 port 56346 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:30:55.306578 sshd[6514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:55.310672 systemd-logind[1711]: New session 17 of user core.
Mar 7 01:30:55.313182 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:30:55.723757 sshd[6514]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:55.726719 systemd-logind[1711]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:30:55.727015 systemd[1]: sshd@14-10.200.20.41:22-10.200.16.10:56346.service: Deactivated successfully.
Mar 7 01:30:55.729364 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:30:55.731609 systemd-logind[1711]: Removed session 17.
Mar 7 01:31:00.812228 systemd[1]: Started sshd@15-10.200.20.41:22-10.200.16.10:50384.service - OpenSSH per-connection server daemon (10.200.16.10:50384).
Mar 7 01:31:01.305880 sshd[6583]: Accepted publickey for core from 10.200.16.10 port 50384 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:01.306759 sshd[6583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:01.310616 systemd-logind[1711]: New session 18 of user core.
Mar 7 01:31:01.319129 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:31:01.715369 sshd[6583]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:01.718845 systemd[1]: sshd@15-10.200.20.41:22-10.200.16.10:50384.service: Deactivated successfully.
Mar 7 01:31:01.721175 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:31:01.723086 systemd-logind[1711]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:31:01.723953 systemd-logind[1711]: Removed session 18.
Mar 7 01:31:06.809232 systemd[1]: Started sshd@16-10.200.20.41:22-10.200.16.10:50398.service - OpenSSH per-connection server daemon (10.200.16.10:50398).
Mar 7 01:31:07.294263 sshd[6596]: Accepted publickey for core from 10.200.16.10 port 50398 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:07.295113 sshd[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:07.299815 systemd-logind[1711]: New session 19 of user core.
Mar 7 01:31:07.305127 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:31:07.705226 sshd[6596]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:07.708545 systemd[1]: sshd@16-10.200.20.41:22-10.200.16.10:50398.service: Deactivated successfully.
Mar 7 01:31:07.710246 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:31:07.710882 systemd-logind[1711]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:31:07.711689 systemd-logind[1711]: Removed session 19.
Mar 7 01:31:07.792222 systemd[1]: Started sshd@17-10.200.20.41:22-10.200.16.10:50400.service - OpenSSH per-connection server daemon (10.200.16.10:50400).
Mar 7 01:31:08.277641 sshd[6608]: Accepted publickey for core from 10.200.16.10 port 50400 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:08.278702 sshd[6608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:08.282866 systemd-logind[1711]: New session 20 of user core.
Mar 7 01:31:08.287127 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:31:08.829668 sshd[6608]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:08.834305 systemd[1]: sshd@17-10.200.20.41:22-10.200.16.10:50400.service: Deactivated successfully.
Mar 7 01:31:08.836763 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:31:08.837873 systemd-logind[1711]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:31:08.839317 systemd-logind[1711]: Removed session 20.
Mar 7 01:31:08.917947 systemd[1]: Started sshd@18-10.200.20.41:22-10.200.16.10:50410.service - OpenSSH per-connection server daemon (10.200.16.10:50410).
Mar 7 01:31:09.412265 sshd[6621]: Accepted publickey for core from 10.200.16.10 port 50410 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:09.413673 sshd[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:09.417694 systemd-logind[1711]: New session 21 of user core.
Mar 7 01:31:09.425217 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:31:10.518761 sshd[6621]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:10.522504 systemd[1]: sshd@18-10.200.20.41:22-10.200.16.10:50410.service: Deactivated successfully.
Mar 7 01:31:10.524188 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:31:10.526176 systemd-logind[1711]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:31:10.527524 systemd-logind[1711]: Removed session 21.
Mar 7 01:31:10.606887 systemd[1]: Started sshd@19-10.200.20.41:22-10.200.16.10:56258.service - OpenSSH per-connection server daemon (10.200.16.10:56258).
Mar 7 01:31:11.104960 sshd[6655]: Accepted publickey for core from 10.200.16.10 port 56258 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:11.106767 sshd[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:11.111340 systemd-logind[1711]: New session 22 of user core.
Mar 7 01:31:11.116876 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:31:11.641226 sshd[6655]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:11.644508 systemd[1]: sshd@19-10.200.20.41:22-10.200.16.10:56258.service: Deactivated successfully.
Mar 7 01:31:11.646224 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:31:11.646874 systemd-logind[1711]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:31:11.648016 systemd-logind[1711]: Removed session 22.
Mar 7 01:31:11.728948 systemd[1]: Started sshd@20-10.200.20.41:22-10.200.16.10:56260.service - OpenSSH per-connection server daemon (10.200.16.10:56260).
Mar 7 01:31:12.230710 sshd[6665]: Accepted publickey for core from 10.200.16.10 port 56260 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:12.231560 sshd[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:12.235075 systemd-logind[1711]: New session 23 of user core.
Mar 7 01:31:12.241196 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:31:12.641232 sshd[6665]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:12.644855 systemd-logind[1711]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:31:12.645213 systemd[1]: sshd@20-10.200.20.41:22-10.200.16.10:56260.service: Deactivated successfully.
Mar 7 01:31:12.649514 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:31:12.650691 systemd-logind[1711]: Removed session 23.
Mar 7 01:31:17.734205 systemd[1]: Started sshd@21-10.200.20.41:22-10.200.16.10:56268.service - OpenSSH per-connection server daemon (10.200.16.10:56268).
Mar 7 01:31:18.222485 sshd[6700]: Accepted publickey for core from 10.200.16.10 port 56268 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:18.223901 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:18.228067 systemd-logind[1711]: New session 24 of user core.
Mar 7 01:31:18.233126 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:31:18.633229 sshd[6700]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:18.636942 systemd-logind[1711]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:31:18.637285 systemd[1]: sshd@21-10.200.20.41:22-10.200.16.10:56268.service: Deactivated successfully.
Mar 7 01:31:18.639171 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:31:18.640014 systemd-logind[1711]: Removed session 24.
Mar 7 01:31:23.722393 systemd[1]: Started sshd@22-10.200.20.41:22-10.200.16.10:60026.service - OpenSSH per-connection server daemon (10.200.16.10:60026).
Mar 7 01:31:24.210200 sshd[6732]: Accepted publickey for core from 10.200.16.10 port 60026 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:24.211055 sshd[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:24.214693 systemd-logind[1711]: New session 25 of user core.
Mar 7 01:31:24.219166 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:31:24.616427 sshd[6732]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:24.620056 systemd[1]: sshd@22-10.200.20.41:22-10.200.16.10:60026.service: Deactivated successfully.
Mar 7 01:31:24.622120 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:31:24.623404 systemd-logind[1711]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:31:24.624325 systemd-logind[1711]: Removed session 25.
Mar 7 01:31:29.708256 systemd[1]: Started sshd@23-10.200.20.41:22-10.200.16.10:60032.service - OpenSSH per-connection server daemon (10.200.16.10:60032).
Mar 7 01:31:30.194845 sshd[6763]: Accepted publickey for core from 10.200.16.10 port 60032 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:30.196960 sshd[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:30.201296 systemd-logind[1711]: New session 26 of user core.
Mar 7 01:31:30.206153 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:31:30.603833 sshd[6763]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:30.607730 systemd-logind[1711]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:31:30.608426 systemd[1]: sshd@23-10.200.20.41:22-10.200.16.10:60032.service: Deactivated successfully.
Mar 7 01:31:30.610732 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:31:30.611800 systemd-logind[1711]: Removed session 26.
Mar 7 01:31:35.712242 systemd[1]: Started sshd@24-10.200.20.41:22-10.200.16.10:47048.service - OpenSSH per-connection server daemon (10.200.16.10:47048).
Mar 7 01:31:36.195649 sshd[6808]: Accepted publickey for core from 10.200.16.10 port 47048 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:36.196492 sshd[6808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:36.205131 systemd-logind[1711]: New session 27 of user core.
Mar 7 01:31:36.210149 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 01:31:36.602717 sshd[6808]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:36.607191 systemd[1]: sshd@24-10.200.20.41:22-10.200.16.10:47048.service: Deactivated successfully.
Mar 7 01:31:36.608993 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 01:31:36.609693 systemd-logind[1711]: Session 27 logged out. Waiting for processes to exit.
Mar 7 01:31:36.610688 systemd-logind[1711]: Removed session 27.
Mar 7 01:31:41.696225 systemd[1]: Started sshd@25-10.200.20.41:22-10.200.16.10:38222.service - OpenSSH per-connection server daemon (10.200.16.10:38222).
Mar 7 01:31:42.186026 sshd[6823]: Accepted publickey for core from 10.200.16.10 port 38222 ssh2: RSA SHA256:Tb1i62CHrltPPFOOnyFLIDbf3+3IpEiMvqzbTlMp5Qo
Mar 7 01:31:42.187426 sshd[6823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:42.191513 systemd-logind[1711]: New session 28 of user core.
Mar 7 01:31:42.199139 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 01:31:42.597833 sshd[6823]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:42.601636 systemd[1]: sshd@25-10.200.20.41:22-10.200.16.10:38222.service: Deactivated successfully.
Mar 7 01:31:42.603745 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 01:31:42.605204 systemd-logind[1711]: Session 28 logged out. Waiting for processes to exit.
Mar 7 01:31:42.606608 systemd-logind[1711]: Removed session 28.