Mar 4 00:42:49.200030 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 4 00:42:49.200052 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026
Mar 4 00:42:49.200060 kernel: KASLR enabled
Mar 4 00:42:49.200066 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 4 00:42:49.200074 kernel: printk: bootconsole [pl11] enabled
Mar 4 00:42:49.200079 kernel: efi: EFI v2.7 by EDK II
Mar 4 00:42:49.200087 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 4 00:42:49.200093 kernel: random: crng init done
Mar 4 00:42:49.200099 kernel: ACPI: Early table checksum verification disabled
Mar 4 00:42:49.200105 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 4 00:42:49.200111 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200117 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200124 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 4 00:42:49.200131 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200138 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200145 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200151 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200159 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200166 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200172 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 4 00:42:49.200179 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200185 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 4 00:42:49.200191 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 4 00:42:49.200198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 4 00:42:49.200205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 4 00:42:49.200211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 4 00:42:49.200218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 4 00:42:49.200224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 4 00:42:49.200232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 4 00:42:49.200238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 4 00:42:49.200245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 4 00:42:49.200251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 4 00:42:49.200258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 4 00:42:49.200264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 4 00:42:49.200271 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 4 00:42:49.200277 kernel: Zone ranges:
Mar 4 00:42:49.200283 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 4 00:42:49.200289 kernel: DMA32 empty
Mar 4 00:42:49.200296 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:42:49.200302 kernel: Movable zone start for each node
Mar 4 00:42:49.200313 kernel: Early memory node ranges
Mar 4 00:42:49.200320 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 4 00:42:49.200327 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 4 00:42:49.200333 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 4 00:42:49.200340 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 4 00:42:49.200349 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 4 00:42:49.200356 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 4 00:42:49.200362 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:42:49.200369 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 4 00:42:49.200376 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 4 00:42:49.200383 kernel: psci: probing for conduit method from ACPI.
Mar 4 00:42:49.200390 kernel: psci: PSCIv1.1 detected in firmware.
Mar 4 00:42:49.200396 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 4 00:42:49.200403 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 4 00:42:49.200410 kernel: psci: SMC Calling Convention v1.4
Mar 4 00:42:49.200417 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 4 00:42:49.200423 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 4 00:42:49.200432 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 4 00:42:49.200439 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 4 00:42:49.200446 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 4 00:42:49.200452 kernel: Detected PIPT I-cache on CPU0
Mar 4 00:42:49.202491 kernel: CPU features: detected: GIC system register CPU interface
Mar 4 00:42:49.202509 kernel: CPU features: detected: Hardware dirty bit management
Mar 4 00:42:49.202516 kernel: CPU features: detected: Spectre-BHB
Mar 4 00:42:49.202524 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 4 00:42:49.202531 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 4 00:42:49.202538 kernel: CPU features: detected: ARM erratum 1418040
Mar 4 00:42:49.202545 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 4 00:42:49.202556 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 4 00:42:49.202563 kernel: alternatives: applying boot alternatives
Mar 4 00:42:49.202572 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:42:49.202580 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 00:42:49.202587 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 00:42:49.202594 kernel: Fallback order for Node 0: 0
Mar 4 00:42:49.202601 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 4 00:42:49.202608 kernel: Policy zone: Normal
Mar 4 00:42:49.202615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 00:42:49.202621 kernel: software IO TLB: area num 2.
Mar 4 00:42:49.202628 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 4 00:42:49.202637 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 4 00:42:49.202644 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 4 00:42:49.202651 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 00:42:49.202659 kernel: rcu: RCU event tracing is enabled.
Mar 4 00:42:49.202666 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 4 00:42:49.202673 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 00:42:49.202680 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 00:42:49.202687 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 00:42:49.202694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 4 00:42:49.202701 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 4 00:42:49.202708 kernel: GICv3: 960 SPIs implemented
Mar 4 00:42:49.202716 kernel: GICv3: 0 Extended SPIs implemented
Mar 4 00:42:49.202723 kernel: Root IRQ handler: gic_handle_irq
Mar 4 00:42:49.202730 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 4 00:42:49.202737 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 4 00:42:49.202744 kernel: ITS: No ITS available, not enabling LPIs
Mar 4 00:42:49.202751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 00:42:49.202759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:42:49.202766 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 4 00:42:49.202773 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 4 00:42:49.202780 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 4 00:42:49.202787 kernel: Console: colour dummy device 80x25
Mar 4 00:42:49.202796 kernel: printk: console [tty1] enabled
Mar 4 00:42:49.202803 kernel: ACPI: Core revision 20230628
Mar 4 00:42:49.202810 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 4 00:42:49.202818 kernel: pid_max: default: 32768 minimum: 301
Mar 4 00:42:49.202825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 00:42:49.202832 kernel: landlock: Up and running.
Mar 4 00:42:49.202839 kernel: SELinux: Initializing.
Mar 4 00:42:49.202846 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.202853 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.202862 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:42:49.202869 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:42:49.202877 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 4 00:42:49.202884 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 4 00:42:49.202891 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 4 00:42:49.202898 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 00:42:49.202905 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 00:42:49.202912 kernel: Remapping and enabling EFI services.
Mar 4 00:42:49.202926 kernel: smp: Bringing up secondary CPUs ...
Mar 4 00:42:49.202934 kernel: Detected PIPT I-cache on CPU1
Mar 4 00:42:49.202942 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 4 00:42:49.202949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:42:49.202958 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 4 00:42:49.202965 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 00:42:49.202973 kernel: SMP: Total of 2 processors activated.
Mar 4 00:42:49.202980 kernel: CPU features: detected: 32-bit EL0 Support
Mar 4 00:42:49.202988 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 4 00:42:49.202998 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 4 00:42:49.203006 kernel: CPU features: detected: CRC32 instructions
Mar 4 00:42:49.203013 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 4 00:42:49.203021 kernel: CPU features: detected: LSE atomic instructions
Mar 4 00:42:49.203028 kernel: CPU features: detected: Privileged Access Never
Mar 4 00:42:49.203036 kernel: CPU: All CPU(s) started at EL1
Mar 4 00:42:49.203043 kernel: alternatives: applying system-wide alternatives
Mar 4 00:42:49.203050 kernel: devtmpfs: initialized
Mar 4 00:42:49.203058 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 00:42:49.203067 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 4 00:42:49.203075 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 00:42:49.203082 kernel: SMBIOS 3.1.0 present.
Mar 4 00:42:49.203090 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 4 00:42:49.203098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 00:42:49.203105 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 4 00:42:49.203113 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 4 00:42:49.203120 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 4 00:42:49.203128 kernel: audit: initializing netlink subsys (disabled)
Mar 4 00:42:49.203137 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 4 00:42:49.203145 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 00:42:49.203152 kernel: cpuidle: using governor menu
Mar 4 00:42:49.203160 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 4 00:42:49.203167 kernel: ASID allocator initialised with 32768 entries
Mar 4 00:42:49.203175 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 00:42:49.203182 kernel: Serial: AMBA PL011 UART driver
Mar 4 00:42:49.203190 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 4 00:42:49.203197 kernel: Modules: 0 pages in range for non-PLT usage
Mar 4 00:42:49.203206 kernel: Modules: 509008 pages in range for PLT usage
Mar 4 00:42:49.203214 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203221 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 00:42:49.203229 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203236 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 4 00:42:49.203244 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203251 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 00:42:49.203258 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 4 00:42:49.203275 kernel: ACPI: Added _OSI(Module Device)
Mar 4 00:42:49.203282 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 00:42:49.203289 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 00:42:49.203297 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 00:42:49.203304 kernel: ACPI: Interpreter enabled
Mar 4 00:42:49.203312 kernel: ACPI: Using GIC for interrupt routing
Mar 4 00:42:49.203319 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 4 00:42:49.203327 kernel: printk: console [ttyAMA0] enabled
Mar 4 00:42:49.203334 kernel: printk: bootconsole [pl11] disabled
Mar 4 00:42:49.203343 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 4 00:42:49.203351 kernel: iommu: Default domain type: Translated
Mar 4 00:42:49.203359 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 4 00:42:49.203366 kernel: efivars: Registered efivars operations
Mar 4 00:42:49.203373 kernel: vgaarb: loaded
Mar 4 00:42:49.203381 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 4 00:42:49.203388 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 00:42:49.203396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 00:42:49.203403 kernel: pnp: PnP ACPI init
Mar 4 00:42:49.203412 kernel: pnp: PnP ACPI: found 0 devices
Mar 4 00:42:49.203420 kernel: NET: Registered PF_INET protocol family
Mar 4 00:42:49.203427 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 00:42:49.203435 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 00:42:49.203442 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 00:42:49.203450 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 00:42:49.205476 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 00:42:49.205501 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 00:42:49.205510 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.205522 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.205530 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 00:42:49.205538 kernel: PCI: CLS 0 bytes, default 64
Mar 4 00:42:49.205545 kernel: kvm [1]: HYP mode not available
Mar 4 00:42:49.205553 kernel: Initialise system trusted keyrings
Mar 4 00:42:49.205561 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 00:42:49.205568 kernel: Key type asymmetric registered
Mar 4 00:42:49.205575 kernel: Asymmetric key parser 'x509' registered
Mar 4 00:42:49.205583 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 4 00:42:49.205592 kernel: io scheduler mq-deadline registered
Mar 4 00:42:49.205600 kernel: io scheduler kyber registered
Mar 4 00:42:49.205607 kernel: io scheduler bfq registered
Mar 4 00:42:49.205615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 00:42:49.205623 kernel: thunder_xcv, ver 1.0
Mar 4 00:42:49.205631 kernel: thunder_bgx, ver 1.0
Mar 4 00:42:49.205638 kernel: nicpf, ver 1.0
Mar 4 00:42:49.205645 kernel: nicvf, ver 1.0
Mar 4 00:42:49.205776 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 4 00:42:49.205853 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T00:42:48 UTC (1772584968)
Mar 4 00:42:49.205863 kernel: efifb: probing for efifb
Mar 4 00:42:49.205871 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 4 00:42:49.205879 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 4 00:42:49.205886 kernel: efifb: scrolling: redraw
Mar 4 00:42:49.205894 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 4 00:42:49.205902 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:42:49.205909 kernel: fb0: EFI VGA frame buffer device
Mar 4 00:42:49.205919 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 4 00:42:49.205927 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 4 00:42:49.205935 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 4 00:42:49.205942 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 4 00:42:49.205950 kernel: watchdog: Hard watchdog permanently disabled
Mar 4 00:42:49.205957 kernel: NET: Registered PF_INET6 protocol family
Mar 4 00:42:49.205965 kernel: Segment Routing with IPv6
Mar 4 00:42:49.205973 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 00:42:49.205980 kernel: NET: Registered PF_PACKET protocol family
Mar 4 00:42:49.205989 kernel: Key type dns_resolver registered
Mar 4 00:42:49.205997 kernel: registered taskstats version 1
Mar 4 00:42:49.206004 kernel: Loading compiled-in X.509 certificates
Mar 4 00:42:49.206012 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659'
Mar 4 00:42:49.206019 kernel: Key type .fscrypt registered
Mar 4 00:42:49.206026 kernel: Key type fscrypt-provisioning registered
Mar 4 00:42:49.206034 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 00:42:49.206041 kernel: ima: Allocated hash algorithm: sha1
Mar 4 00:42:49.206049 kernel: ima: No architecture policies found
Mar 4 00:42:49.206058 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 4 00:42:49.206066 kernel: clk: Disabling unused clocks
Mar 4 00:42:49.206073 kernel: Freeing unused kernel memory: 39424K
Mar 4 00:42:49.206081 kernel: Run /init as init process
Mar 4 00:42:49.206088 kernel: with arguments:
Mar 4 00:42:49.206095 kernel: /init
Mar 4 00:42:49.206103 kernel: with environment:
Mar 4 00:42:49.206110 kernel: HOME=/
Mar 4 00:42:49.206117 kernel: TERM=linux
Mar 4 00:42:49.206127 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:42:49.206138 systemd[1]: Detected virtualization microsoft.
Mar 4 00:42:49.206147 systemd[1]: Detected architecture arm64.
Mar 4 00:42:49.206154 systemd[1]: Running in initrd.
Mar 4 00:42:49.206162 systemd[1]: No hostname configured, using default hostname.
Mar 4 00:42:49.206170 systemd[1]: Hostname set to .
Mar 4 00:42:49.206178 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:42:49.206188 systemd[1]: Queued start job for default target initrd.target.
Mar 4 00:42:49.206196 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:42:49.206205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:42:49.206214 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 00:42:49.206222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:42:49.206230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 00:42:49.206239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 00:42:49.206248 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 00:42:49.206258 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 00:42:49.206266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:42:49.206274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:42:49.206282 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:42:49.206290 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:42:49.206298 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:42:49.206306 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:42:49.206314 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:42:49.206323 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:42:49.206332 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:42:49.206340 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:42:49.206348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:42:49.206356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:42:49.206364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:42:49.206372 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:42:49.206381 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 00:42:49.206391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:42:49.206399 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 00:42:49.206407 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 00:42:49.206415 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:42:49.206440 systemd-journald[217]: Collecting audit messages is disabled.
Mar 4 00:42:49.206475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:42:49.206484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:49.206493 systemd-journald[217]: Journal started
Mar 4 00:42:49.206513 systemd-journald[217]: Runtime Journal (/run/log/journal/42070f8474a8442ebc40fbc3b7569224) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:42:49.211737 systemd-modules-load[218]: Inserted module 'overlay'
Mar 4 00:42:49.234471 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 00:42:49.247840 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:42:49.247886 kernel: Bridge firewalling registered
Mar 4 00:42:49.247962 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 4 00:42:49.249824 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 00:42:49.257066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:42:49.267422 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 00:42:49.276204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:42:49.284331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:49.305777 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:42:49.313604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:42:49.329171 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:42:49.350595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:42:49.362778 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:49.368784 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:42:49.379248 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:42:49.388949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:42:49.407718 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 00:42:49.415642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:42:49.434595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:42:49.453750 dracut-cmdline[249]: dracut-dracut-053
Mar 4 00:42:49.464616 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:42:49.457859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:42:49.461723 systemd-resolved[251]: Positive Trust Anchors:
Mar 4 00:42:49.461732 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:42:49.461764 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:42:49.463940 systemd-resolved[251]: Defaulting to hostname 'linux'.
Mar 4 00:42:49.466618 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:42:49.503421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:42:49.617491 kernel: SCSI subsystem initialized
Mar 4 00:42:49.624469 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 00:42:49.634477 kernel: iscsi: registered transport (tcp)
Mar 4 00:42:49.650899 kernel: iscsi: registered transport (qla4xxx)
Mar 4 00:42:49.650915 kernel: QLogic iSCSI HBA Driver
Mar 4 00:42:49.684648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:42:49.696715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 00:42:49.724643 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 00:42:49.724710 kernel: device-mapper: uevent: version 1.0.3
Mar 4 00:42:49.729911 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 00:42:49.776490 kernel: raid6: neonx8 gen() 15799 MB/s
Mar 4 00:42:49.795491 kernel: raid6: neonx4 gen() 15688 MB/s
Mar 4 00:42:49.814474 kernel: raid6: neonx2 gen() 13240 MB/s
Mar 4 00:42:49.834478 kernel: raid6: neonx1 gen() 10483 MB/s
Mar 4 00:42:49.853470 kernel: raid6: int64x8 gen() 6977 MB/s
Mar 4 00:42:49.872485 kernel: raid6: int64x4 gen() 7369 MB/s
Mar 4 00:42:49.892477 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 4 00:42:49.914336 kernel: raid6: int64x1 gen() 5074 MB/s
Mar 4 00:42:49.914397 kernel: raid6: using algorithm neonx8 gen() 15799 MB/s
Mar 4 00:42:49.937303 kernel: raid6: .... xor() 12016 MB/s, rmw enabled
Mar 4 00:42:49.937317 kernel: raid6: using neon recovery algorithm
Mar 4 00:42:49.944468 kernel: xor: measuring software checksum speed
Mar 4 00:42:49.950952 kernel: 8regs : 19066 MB/sec
Mar 4 00:42:49.950975 kernel: 32regs : 19650 MB/sec
Mar 4 00:42:49.953872 kernel: arm64_neon : 27043 MB/sec
Mar 4 00:42:49.957250 kernel: xor: using function: arm64_neon (27043 MB/sec)
Mar 4 00:42:50.006474 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 00:42:50.015735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:42:50.029584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:42:50.048546 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Mar 4 00:42:50.052819 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:42:50.068568 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 00:42:50.094642 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Mar 4 00:42:50.120586 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:42:50.136723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:42:50.173216 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:42:50.189621 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 00:42:50.214684 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:42:50.224204 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:42:50.240905 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:42:50.257906 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:42:50.280704 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 00:42:50.299594 kernel: hv_vmbus: Vmbus version:5.3
Mar 4 00:42:50.295193 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:42:50.305343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:42:50.332533 kernel: hv_vmbus: registering driver hid_hyperv
Mar 4 00:42:50.332557 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 4 00:42:50.332567 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 4 00:42:50.332577 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 4 00:42:50.305512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:50.353739 kernel: hv_vmbus: registering driver hv_netvsc
Mar 4 00:42:50.353761 kernel: PTP clock support registered
Mar 4 00:42:50.332553 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:42:50.376491 kernel: hv_utils: Registering HyperV Utility Driver
Mar 4 00:42:50.376524 kernel: hv_vmbus: registering driver hv_utils
Mar 4 00:42:50.343533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:42:50.643181 kernel: hv_utils: Heartbeat IC version 3.0
Mar 4 00:42:50.643209 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 4 00:42:50.643227 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 4 00:42:50.643237 kernel: hv_utils: Shutdown IC version 3.2
Mar 4 00:42:50.643246 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 4 00:42:50.643394 kernel: hv_utils: TimeSync IC version 4.0
Mar 4 00:42:50.643405 kernel: hv_vmbus: registering driver hv_storvsc
Mar 4 00:42:50.343755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:50.668679 kernel: scsi host0: storvsc_host_t
Mar 4 00:42:50.671446 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 4 00:42:50.671480 kernel: scsi host1: storvsc_host_t
Mar 4 00:42:50.671574 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 4 00:42:50.362612 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:50.636084 systemd-resolved[251]: Clock change detected. Flushing caches.
Mar 4 00:42:50.637513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:50.670965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:42:50.671057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:50.714133 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: VF slot 1 added
Mar 4 00:42:50.687615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:50.726266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:50.740050 kernel: hv_vmbus: registering driver hv_pci
Mar 4 00:42:50.753864 kernel: hv_pci e0334db2-fbf1-41c9-812e-aedc5b2a642d: PCI VMBus probing: Using version 0x10004
Mar 4 00:42:50.754090 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 4 00:42:50.754227 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 00:42:50.756054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:42:50.777417 kernel: hv_pci e0334db2-fbf1-41c9-812e-aedc5b2a642d: PCI host bridge to bus fbf1:00
Mar 4 00:42:50.777582 kernel: pci_bus fbf1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 4 00:42:50.778925 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 4 00:42:50.785973 kernel: pci_bus fbf1:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 4 00:42:50.798237 kernel: pci fbf1:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 4 00:42:50.792866 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:50.810410 kernel: pci fbf1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:42:50.819699 kernel: pci fbf1:00:02.0: enabling Extended Tags
Mar 4 00:42:50.819770 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 4 00:42:50.819936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:42:50.829327 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 4 00:42:50.833955 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 4 00:42:50.839913 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 4 00:42:50.840079 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 4 00:42:50.840193 kernel: pci fbf1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fbf1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 4 00:42:50.858119 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:50.858162 kernel: pci_bus fbf1:00: busn_res: [bus 00-ff] end is updated to 00
Mar 4 00:42:50.863121 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 4 00:42:50.863297 kernel: pci fbf1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:42:50.886291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#96 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:42:50.918191 kernel: mlx5_core fbf1:00:02.0: enabling device (0000 -> 0002)
Mar 4 00:42:50.924120 kernel: mlx5_core fbf1:00:02.0: firmware version: 16.30.5026
Mar 4 00:42:51.118379 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: VF registering: eth1
Mar 4 00:42:51.118570 kernel: mlx5_core fbf1:00:02.0 eth1: joined to eth0
Mar 4 00:42:51.123952 kernel: mlx5_core fbf1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 4 00:42:51.135134 kernel: mlx5_core fbf1:00:02.0 enP64497s1: renamed from eth1
Mar 4 00:42:51.433318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 4 00:42:51.456065 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (493)
Mar 4 00:42:51.470126 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 00:42:51.480924 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 4 00:42:51.497918 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (491)
Mar 4 00:42:51.510235 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 4 00:42:51.516417 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 4 00:42:51.541387 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 00:42:51.565229 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:51.574125 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:51.584132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:52.585185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:52.585239 disk-uuid[605]: The operation has completed successfully.
Mar 4 00:42:52.642027 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 00:42:52.642134 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 00:42:52.679297 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 00:42:52.689957 sh[718]: Success
Mar 4 00:42:52.719178 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 4 00:42:53.021381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 00:42:53.030899 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 00:42:53.046201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 00:42:53.072558 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d
Mar 4 00:42:53.072606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:53.078108 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 00:42:53.082439 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 00:42:53.086311 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 00:42:53.426022 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 00:42:53.430161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 00:42:53.447350 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 00:42:53.454317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 00:42:53.487824 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:53.487885 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:53.491648 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:42:53.534221 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:42:53.546466 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 00:42:53.556201 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:53.550446 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:42:53.569311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:42:53.576646 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 00:42:53.592329 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 00:42:53.606001 systemd-networkd[899]: lo: Link UP
Mar 4 00:42:53.606013 systemd-networkd[899]: lo: Gained carrier
Mar 4 00:42:53.607645 systemd-networkd[899]: Enumeration completed
Mar 4 00:42:53.609912 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:42:53.610491 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:42:53.610495 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:42:53.615342 systemd[1]: Reached target network.target - Network.
Mar 4 00:42:53.692137 kernel: mlx5_core fbf1:00:02.0 enP64497s1: Link up
Mar 4 00:42:53.731134 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: Data path switched to VF: enP64497s1
Mar 4 00:42:53.730773 systemd-networkd[899]: enP64497s1: Link UP
Mar 4 00:42:53.730863 systemd-networkd[899]: eth0: Link UP
Mar 4 00:42:53.730983 systemd-networkd[899]: eth0: Gained carrier
Mar 4 00:42:53.730991 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:42:53.737520 systemd-networkd[899]: enP64497s1: Gained carrier
Mar 4 00:42:53.759150 systemd-networkd[899]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:42:54.635771 ignition[902]: Ignition 2.19.0
Mar 4 00:42:54.635780 ignition[902]: Stage: fetch-offline
Mar 4 00:42:54.639763 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:42:54.635815 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.635823 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.635914 ignition[902]: parsed url from cmdline: ""
Mar 4 00:42:54.635917 ignition[902]: no config URL provided
Mar 4 00:42:54.635921 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.635927 ignition[902]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.664395 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 4 00:42:54.635934 ignition[902]: failed to fetch config: resource requires networking
Mar 4 00:42:54.636143 ignition[902]: Ignition finished successfully
Mar 4 00:42:54.684983 ignition[913]: Ignition 2.19.0
Mar 4 00:42:54.684989 ignition[913]: Stage: fetch
Mar 4 00:42:54.685859 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.685879 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.686010 ignition[913]: parsed url from cmdline: ""
Mar 4 00:42:54.686015 ignition[913]: no config URL provided
Mar 4 00:42:54.686019 ignition[913]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.686027 ignition[913]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.686050 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 4 00:42:54.770714 ignition[913]: GET result: OK
Mar 4 00:42:54.770839 ignition[913]: config has been read from IMDS userdata
Mar 4 00:42:54.770885 ignition[913]: parsing config with SHA512: 8ceaca32b995bde54eac9a56bf097cc5d3648d4c95999c2e18ab917aedf2204cae718e24fd2633984150fffd2920940a11211ee1c40892a48535585d5217df5a
Mar 4 00:42:54.775979 unknown[913]: fetched base config from "system"
Mar 4 00:42:54.776392 ignition[913]: fetch: fetch complete
Mar 4 00:42:54.775987 unknown[913]: fetched base config from "system"
Mar 4 00:42:54.776396 ignition[913]: fetch: fetch passed
Mar 4 00:42:54.775993 unknown[913]: fetched user config from "azure"
Mar 4 00:42:54.776442 ignition[913]: Ignition finished successfully
Mar 4 00:42:54.778688 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 00:42:54.799238 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 00:42:54.817186 ignition[919]: Ignition 2.19.0
Mar 4 00:42:54.817203 ignition[919]: Stage: kargs
Mar 4 00:42:54.821213 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 00:42:54.817393 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.817402 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.818293 ignition[919]: kargs: kargs passed
Mar 4 00:42:54.818352 ignition[919]: Ignition finished successfully
Mar 4 00:42:54.844441 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 00:42:54.861819 ignition[926]: Ignition 2.19.0
Mar 4 00:42:54.861832 ignition[926]: Stage: disks
Mar 4 00:42:54.867315 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 00:42:54.862048 ignition[926]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.872639 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 00:42:54.862057 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.881800 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 00:42:54.864346 ignition[926]: disks: disks passed
Mar 4 00:42:54.885439 systemd-networkd[899]: eth0: Gained IPv6LL
Mar 4 00:42:54.864399 ignition[926]: Ignition finished successfully
Mar 4 00:42:54.891740 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:42:54.900823 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 00:42:54.909790 systemd[1]: Reached target basic.target - Basic System.
Mar 4 00:42:54.929348 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 00:42:55.023615 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 4 00:42:55.032974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 00:42:55.049328 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 00:42:55.101120 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none.
Mar 4 00:42:55.101254 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 00:42:55.105505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:42:55.161175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:42:55.182120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Mar 4 00:42:55.192635 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:55.192652 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:55.197126 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:42:55.206133 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:42:55.206283 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 00:42:55.215750 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 4 00:42:55.226200 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 00:42:55.226233 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:42:55.233216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:42:55.245391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 00:42:55.267375 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 00:42:55.988305 coreos-metadata[962]: Mar 04 00:42:55.988 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 00:42:55.996594 coreos-metadata[962]: Mar 04 00:42:55.996 INFO Fetch successful
Mar 4 00:42:56.001454 coreos-metadata[962]: Mar 04 00:42:56.001 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 4 00:42:56.021212 coreos-metadata[962]: Mar 04 00:42:56.021 INFO Fetch successful
Mar 4 00:42:56.026442 coreos-metadata[962]: Mar 04 00:42:56.023 INFO wrote hostname ci-4081.3.6-n-d3c3414975 to /sysroot/etc/hostname
Mar 4 00:42:56.026189 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:42:56.141160 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 00:42:56.180661 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory
Mar 4 00:42:56.187068 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 00:42:56.194678 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 00:42:57.098913 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 00:42:57.111299 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 00:42:57.117573 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 00:42:57.137622 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 00:42:57.141926 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:57.159762 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 00:42:57.173809 ignition[1064]: INFO : Ignition 2.19.0
Mar 4 00:42:57.173809 ignition[1064]: INFO : Stage: mount
Mar 4 00:42:57.183173 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:57.183173 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:57.183173 ignition[1064]: INFO : mount: mount passed
Mar 4 00:42:57.183173 ignition[1064]: INFO : Ignition finished successfully
Mar 4 00:42:57.178583 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 00:42:57.197341 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 00:42:57.216334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:42:57.249119 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Mar 4 00:42:57.259830 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:57.259856 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:57.263183 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:42:57.271083 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:42:57.271711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:42:57.301726 ignition[1092]: INFO : Ignition 2.19.0
Mar 4 00:42:57.301726 ignition[1092]: INFO : Stage: files
Mar 4 00:42:57.308216 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:57.308216 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:57.308216 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 00:42:57.322840 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 00:42:57.322840 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 00:42:57.468494 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 00:42:57.474691 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 00:42:57.474691 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 00:42:57.468882 unknown[1092]: wrote ssh authorized keys file for user: core
Mar 4 00:42:57.490338 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:42:57.490338 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 4 00:42:57.530365 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 00:42:57.659233 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 4 00:42:58.156851 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 4 00:42:58.705440 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:58.705440 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 4 00:42:58.723678 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: files passed
Mar 4 00:42:58.737348 ignition[1092]: INFO : Ignition finished successfully
Mar 4 00:42:58.732600 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 00:42:58.759362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 00:42:58.771535 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 00:42:58.782475 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 00:42:58.839815 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:42:58.839815 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:42:58.784135 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 00:42:58.858803 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:42:58.817965 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:42:58.825407 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 00:42:58.859363 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 00:42:58.902493 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 00:42:58.902625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 00:42:58.912066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 00:42:58.922262 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 00:42:58.931313 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 00:42:58.947615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 00:42:58.968569 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:42:58.982357 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 00:42:58.998642 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:42:59.004314 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:42:59.014339 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 00:42:59.023262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 00:42:59.023424 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:42:59.036525 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 00:42:59.046304 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 00:42:59.054599 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 00:42:59.063461 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:42:59.073341 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 00:42:59.083029 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 00:42:59.092391 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:42:59.101852 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 00:42:59.111698 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 00:42:59.120361 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 00:42:59.127960 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 00:42:59.128140 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:42:59.140121 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:42:59.149311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:42:59.158732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 00:42:59.158834 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:42:59.169239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 00:42:59.169396 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:42:59.183919 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 00:42:59.184085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:42:59.193494 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 00:42:59.193643 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 00:42:59.202287 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 4 00:42:59.202426 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:42:59.225207 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 00:42:59.242434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 00:42:59.261631 ignition[1145]: INFO : Ignition 2.19.0
Mar 4 00:42:59.261631 ignition[1145]: INFO : Stage: umount
Mar 4 00:42:59.261631 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:59.261631 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:59.261631 ignition[1145]: INFO : umount: umount passed
Mar 4 00:42:59.261631 ignition[1145]: INFO : Ignition finished successfully
Mar 4 00:42:59.251955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 00:42:59.252136 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:42:59.258446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 00:42:59.258590 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:42:59.280765 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 00:42:59.281665 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 00:42:59.283238 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 00:42:59.296171 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 00:42:59.296267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 00:42:59.315767 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 00:42:59.315815 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 00:42:59.324028 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 00:42:59.324069 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 00:42:59.332848 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 00:42:59.332885 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 00:42:59.342223 systemd[1]: Stopped target network.target - Network.
Mar 4 00:42:59.350859 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 00:42:59.350926 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:42:59.360691 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:42:59.368639 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:42:59.374147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:42:59.383341 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:42:59.393227 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:42:59.401655 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:42:59.401707 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:42:59.410269 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:42:59.410316 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:42:59.418533 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:42:59.418580 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:42:59.426912 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:42:59.426953 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:42:59.435132 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:42:59.444880 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 00:42:59.459150 systemd-networkd[899]: eth0: DHCPv6 lease lost
Mar 4 00:42:59.463552 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 00:42:59.463670 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 00:42:59.469441 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 00:42:59.612787 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: Data path switched from VF: enP64497s1
Mar 4 00:42:59.469478 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:42:59.495462 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 00:42:59.502509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 00:42:59.502587 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:42:59.512031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:42:59.525407 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 00:42:59.525516 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 00:42:59.546564 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 00:42:59.546760 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:42:59.557046 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 00:42:59.557289 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:42:59.565235 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 00:42:59.565274 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:42:59.574553 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 00:42:59.574603 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:42:59.586727 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 00:42:59.586773 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:42:59.596273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:42:59.596345 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:59.626355 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 00:42:59.643000 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 00:42:59.643074 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:42:59.647987 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 00:42:59.648029 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:42:59.657662 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 00:42:59.657711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:42:59.667734 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 00:42:59.667788 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:42:59.677045 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 00:42:59.677164 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:42:59.688430 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 00:42:59.688479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:42:59.698393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:42:59.698439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:59.708238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 00:42:59.708358 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 00:42:59.726535 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 00:42:59.728131 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 00:42:59.895169 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 00:42:59.895293 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 00:42:59.903888 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 00:42:59.909223 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 00:42:59.909286 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 00:42:59.932336 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 00:42:59.946384 systemd[1]: Switching root.
Mar 4 00:43:00.041989 systemd-journald[217]: Journal stopped
Mar 4 00:42:49.200030 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 4 00:42:49.200052 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026
Mar 4 00:42:49.200060 kernel: KASLR enabled
Mar 4 00:42:49.200066 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 4 00:42:49.200074 kernel: printk: bootconsole [pl11] enabled
Mar 4 00:42:49.200079 kernel: efi: EFI v2.7 by EDK II
Mar 4 00:42:49.200087 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 4 00:42:49.200093 kernel: random: crng init done
Mar 4 00:42:49.200099 kernel: ACPI: Early table checksum verification disabled
Mar 4 00:42:49.200105 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 4 00:42:49.200111 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200117 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200124 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 4 00:42:49.200131 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200138 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200145 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200151 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200159 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200166 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200172 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 4 00:42:49.200179 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 00:42:49.200185 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 4 00:42:49.200191 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 4 00:42:49.200198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 4 00:42:49.200205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 4 00:42:49.200211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 4 00:42:49.200218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 4 00:42:49.200224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 4 00:42:49.200232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 4 00:42:49.200238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 4 00:42:49.200245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 4 00:42:49.200251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 4 00:42:49.200258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 4 00:42:49.200264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 4 00:42:49.200271 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 4 00:42:49.200277 kernel: Zone ranges:
Mar 4 00:42:49.200283 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 4 00:42:49.200289 kernel: DMA32 empty
Mar 4 00:42:49.200296 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:42:49.200302 kernel: Movable zone start for each node
Mar 4 00:42:49.200313 kernel: Early memory node ranges
Mar 4 00:42:49.200320 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 4 00:42:49.200327 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 4 00:42:49.200333 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 4 00:42:49.200340 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 4 00:42:49.200349 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 4 00:42:49.200356 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 4 00:42:49.200362 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 00:42:49.200369 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 4 00:42:49.200376 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 4 00:42:49.200383 kernel: psci: probing for conduit method from ACPI.
Mar 4 00:42:49.200390 kernel: psci: PSCIv1.1 detected in firmware.
Mar 4 00:42:49.200396 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 4 00:42:49.200403 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 4 00:42:49.200410 kernel: psci: SMC Calling Convention v1.4
Mar 4 00:42:49.200417 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 4 00:42:49.200423 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 4 00:42:49.200432 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 4 00:42:49.200439 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 4 00:42:49.200446 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 4 00:42:49.200452 kernel: Detected PIPT I-cache on CPU0
Mar 4 00:42:49.202491 kernel: CPU features: detected: GIC system register CPU interface
Mar 4 00:42:49.202509 kernel: CPU features: detected: Hardware dirty bit management
Mar 4 00:42:49.202516 kernel: CPU features: detected: Spectre-BHB
Mar 4 00:42:49.202524 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 4 00:42:49.202531 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 4 00:42:49.202538 kernel: CPU features: detected: ARM erratum 1418040
Mar 4 00:42:49.202545 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 4 00:42:49.202556 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 4 00:42:49.202563 kernel: alternatives: applying boot alternatives
Mar 4 00:42:49.202572 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:42:49.202580 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 00:42:49.202587 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 00:42:49.202594 kernel: Fallback order for Node 0: 0
Mar 4 00:42:49.202601 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 4 00:42:49.202608 kernel: Policy zone: Normal
Mar 4 00:42:49.202615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 00:42:49.202621 kernel: software IO TLB: area num 2.
Mar 4 00:42:49.202628 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 4 00:42:49.202637 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 4 00:42:49.202644 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 4 00:42:49.202651 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 00:42:49.202659 kernel: rcu: RCU event tracing is enabled.
Mar 4 00:42:49.202666 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 4 00:42:49.202673 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 00:42:49.202680 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 00:42:49.202687 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 00:42:49.202694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 4 00:42:49.202701 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 4 00:42:49.202708 kernel: GICv3: 960 SPIs implemented
Mar 4 00:42:49.202716 kernel: GICv3: 0 Extended SPIs implemented
Mar 4 00:42:49.202723 kernel: Root IRQ handler: gic_handle_irq
Mar 4 00:42:49.202730 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 4 00:42:49.202737 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 4 00:42:49.202744 kernel: ITS: No ITS available, not enabling LPIs
Mar 4 00:42:49.202751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 00:42:49.202759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:42:49.202766 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 4 00:42:49.202773 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 4 00:42:49.202780 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 4 00:42:49.202787 kernel: Console: colour dummy device 80x25
Mar 4 00:42:49.202796 kernel: printk: console [tty1] enabled
Mar 4 00:42:49.202803 kernel: ACPI: Core revision 20230628
Mar 4 00:42:49.202810 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 4 00:42:49.202818 kernel: pid_max: default: 32768 minimum: 301
Mar 4 00:42:49.202825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 00:42:49.202832 kernel: landlock: Up and running.
Mar 4 00:42:49.202839 kernel: SELinux: Initializing.
Mar 4 00:42:49.202846 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.202853 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.202862 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:42:49.202869 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 00:42:49.202877 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 4 00:42:49.202884 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 4 00:42:49.202891 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 4 00:42:49.202898 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 00:42:49.202905 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 00:42:49.202912 kernel: Remapping and enabling EFI services.
Mar 4 00:42:49.202926 kernel: smp: Bringing up secondary CPUs ...
Mar 4 00:42:49.202934 kernel: Detected PIPT I-cache on CPU1
Mar 4 00:42:49.202942 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 4 00:42:49.202949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 00:42:49.202958 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 4 00:42:49.202965 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 00:42:49.202973 kernel: SMP: Total of 2 processors activated.
Mar 4 00:42:49.202980 kernel: CPU features: detected: 32-bit EL0 Support
Mar 4 00:42:49.202988 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 4 00:42:49.202998 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 4 00:42:49.203006 kernel: CPU features: detected: CRC32 instructions
Mar 4 00:42:49.203013 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 4 00:42:49.203021 kernel: CPU features: detected: LSE atomic instructions
Mar 4 00:42:49.203028 kernel: CPU features: detected: Privileged Access Never
Mar 4 00:42:49.203036 kernel: CPU: All CPU(s) started at EL1
Mar 4 00:42:49.203043 kernel: alternatives: applying system-wide alternatives
Mar 4 00:42:49.203050 kernel: devtmpfs: initialized
Mar 4 00:42:49.203058 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 00:42:49.203067 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 4 00:42:49.203075 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 00:42:49.203082 kernel: SMBIOS 3.1.0 present.
Mar 4 00:42:49.203090 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 4 00:42:49.203098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 00:42:49.203105 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 4 00:42:49.203113 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 4 00:42:49.203120 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 4 00:42:49.203128 kernel: audit: initializing netlink subsys (disabled)
Mar 4 00:42:49.203137 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 4 00:42:49.203145 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 00:42:49.203152 kernel: cpuidle: using governor menu
Mar 4 00:42:49.203160 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 4 00:42:49.203167 kernel: ASID allocator initialised with 32768 entries
Mar 4 00:42:49.203175 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 00:42:49.203182 kernel: Serial: AMBA PL011 UART driver
Mar 4 00:42:49.203190 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 4 00:42:49.203197 kernel: Modules: 0 pages in range for non-PLT usage
Mar 4 00:42:49.203206 kernel: Modules: 509008 pages in range for PLT usage
Mar 4 00:42:49.203214 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203221 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 00:42:49.203229 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203236 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 4 00:42:49.203244 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203251 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 00:42:49.203258 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 4 00:42:49.203266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 4 00:42:49.203275 kernel: ACPI: Added _OSI(Module Device)
Mar 4 00:42:49.203282 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 00:42:49.203289 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 00:42:49.203297 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 00:42:49.203304 kernel: ACPI: Interpreter enabled
Mar 4 00:42:49.203312 kernel: ACPI: Using GIC for interrupt routing
Mar 4 00:42:49.203319 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 4 00:42:49.203327 kernel: printk: console [ttyAMA0] enabled
Mar 4 00:42:49.203334 kernel: printk: bootconsole [pl11] disabled
Mar 4 00:42:49.203343 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 4 00:42:49.203351 kernel: iommu: Default domain type: Translated
Mar 4 00:42:49.203359 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 4 00:42:49.203366 kernel: efivars: Registered efivars operations
Mar 4 00:42:49.203373 kernel: vgaarb: loaded
Mar 4 00:42:49.203381 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 4 00:42:49.203388 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 00:42:49.203396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 00:42:49.203403 kernel: pnp: PnP ACPI init
Mar 4 00:42:49.203412 kernel: pnp: PnP ACPI: found 0 devices
Mar 4 00:42:49.203420 kernel: NET: Registered PF_INET protocol family
Mar 4 00:42:49.203427 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 00:42:49.203435 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 00:42:49.203442 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 00:42:49.203450 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 00:42:49.205476 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 00:42:49.205501 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 00:42:49.205510 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.205522 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 00:42:49.205530 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 00:42:49.205538 kernel: PCI: CLS 0 bytes, default 64
Mar 4 00:42:49.205545 kernel: kvm [1]: HYP mode not available
Mar 4 00:42:49.205553 kernel: Initialise system trusted keyrings
Mar 4 00:42:49.205561 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 00:42:49.205568 kernel: Key type asymmetric registered
Mar 4 00:42:49.205575 kernel: Asymmetric key parser 'x509' registered
Mar 4 00:42:49.205583 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 4 00:42:49.205592 kernel: io scheduler mq-deadline registered
Mar 4 00:42:49.205600 kernel: io scheduler kyber registered
Mar 4 00:42:49.205607 kernel: io scheduler bfq registered
Mar 4 00:42:49.205615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 00:42:49.205623 kernel: thunder_xcv, ver 1.0
Mar 4 00:42:49.205631 kernel: thunder_bgx, ver 1.0
Mar 4 00:42:49.205638 kernel: nicpf, ver 1.0
Mar 4 00:42:49.205645 kernel: nicvf, ver 1.0
Mar 4 00:42:49.205776 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 4 00:42:49.205853 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T00:42:48 UTC (1772584968)
Mar 4 00:42:49.205863 kernel: efifb: probing for efifb
Mar 4 00:42:49.205871 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 4 00:42:49.205879 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 4 00:42:49.205886 kernel: efifb: scrolling: redraw
Mar 4 00:42:49.205894 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 4 00:42:49.205902 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:42:49.205909 kernel: fb0: EFI VGA frame buffer device
Mar 4 00:42:49.205919 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 4 00:42:49.205927 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 4 00:42:49.205935 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 4 00:42:49.205942 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 4 00:42:49.205950 kernel: watchdog: Hard watchdog permanently disabled
Mar 4 00:42:49.205957 kernel: NET: Registered PF_INET6 protocol family
Mar 4 00:42:49.205965 kernel: Segment Routing with IPv6
Mar 4 00:42:49.205973 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 00:42:49.205980 kernel: NET: Registered PF_PACKET protocol family
Mar 4 00:42:49.205989 kernel: Key type dns_resolver registered
Mar 4 00:42:49.205997 kernel: registered taskstats version 1
Mar 4 00:42:49.206004 kernel: Loading compiled-in X.509 certificates
Mar 4 00:42:49.206012 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659'
Mar 4 00:42:49.206019 kernel: Key type .fscrypt registered
Mar 4 00:42:49.206026 kernel: Key type fscrypt-provisioning registered
Mar 4 00:42:49.206034 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 00:42:49.206041 kernel: ima: Allocated hash algorithm: sha1
Mar 4 00:42:49.206049 kernel: ima: No architecture policies found
Mar 4 00:42:49.206058 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 4 00:42:49.206066 kernel: clk: Disabling unused clocks
Mar 4 00:42:49.206073 kernel: Freeing unused kernel memory: 39424K
Mar 4 00:42:49.206081 kernel: Run /init as init process
Mar 4 00:42:49.206088 kernel: with arguments:
Mar 4 00:42:49.206095 kernel: /init
Mar 4 00:42:49.206103 kernel: with environment:
Mar 4 00:42:49.206110 kernel: HOME=/
Mar 4 00:42:49.206117 kernel: TERM=linux
Mar 4 00:42:49.206127 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:42:49.206138 systemd[1]: Detected virtualization microsoft.
Mar 4 00:42:49.206147 systemd[1]: Detected architecture arm64.
Mar 4 00:42:49.206154 systemd[1]: Running in initrd.
Mar 4 00:42:49.206162 systemd[1]: No hostname configured, using default hostname.
Mar 4 00:42:49.206170 systemd[1]: Hostname set to .
Mar 4 00:42:49.206178 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:42:49.206188 systemd[1]: Queued start job for default target initrd.target.
Mar 4 00:42:49.206196 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:42:49.206205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:42:49.206214 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 00:42:49.206222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:42:49.206230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 00:42:49.206239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 00:42:49.206248 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 00:42:49.206258 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 00:42:49.206266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:42:49.206274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:42:49.206282 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:42:49.206290 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:42:49.206298 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:42:49.206306 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:42:49.206314 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:42:49.206323 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:42:49.206332 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:42:49.206340 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:42:49.206348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:42:49.206356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:42:49.206364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:42:49.206372 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:42:49.206381 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 00:42:49.206391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:42:49.206399 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 00:42:49.206407 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 00:42:49.206415 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:42:49.206440 systemd-journald[217]: Collecting audit messages is disabled.
Mar 4 00:42:49.206475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:42:49.206484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:49.206493 systemd-journald[217]: Journal started
Mar 4 00:42:49.206513 systemd-journald[217]: Runtime Journal (/run/log/journal/42070f8474a8442ebc40fbc3b7569224) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:42:49.211737 systemd-modules-load[218]: Inserted module 'overlay'
Mar 4 00:42:49.234471 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 00:42:49.247840 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:42:49.247886 kernel: Bridge firewalling registered
Mar 4 00:42:49.247962 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 4 00:42:49.249824 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 00:42:49.257066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:42:49.267422 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 00:42:49.276204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:42:49.284331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:49.305777 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:42:49.313604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:42:49.329171 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:42:49.350595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:42:49.362778 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:49.368784 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:42:49.379248 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:42:49.388949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:42:49.407718 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 00:42:49.415642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:42:49.434595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:42:49.453750 dracut-cmdline[249]: dracut-dracut-053
Mar 4 00:42:49.464616 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 00:42:49.457859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:42:49.461723 systemd-resolved[251]: Positive Trust Anchors:
Mar 4 00:42:49.461732 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:42:49.461764 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:42:49.463940 systemd-resolved[251]: Defaulting to hostname 'linux'.
Mar 4 00:42:49.466618 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:42:49.503421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:42:49.617491 kernel: SCSI subsystem initialized
Mar 4 00:42:49.624469 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 00:42:49.634477 kernel: iscsi: registered transport (tcp)
Mar 4 00:42:49.650899 kernel: iscsi: registered transport (qla4xxx)
Mar 4 00:42:49.650915 kernel: QLogic iSCSI HBA Driver
Mar 4 00:42:49.684648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:42:49.696715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 00:42:49.724643 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 00:42:49.724710 kernel: device-mapper: uevent: version 1.0.3
Mar 4 00:42:49.729911 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 00:42:49.776490 kernel: raid6: neonx8 gen() 15799 MB/s
Mar 4 00:42:49.795491 kernel: raid6: neonx4 gen() 15688 MB/s
Mar 4 00:42:49.814474 kernel: raid6: neonx2 gen() 13240 MB/s
Mar 4 00:42:49.834478 kernel: raid6: neonx1 gen() 10483 MB/s
Mar 4 00:42:49.853470 kernel: raid6: int64x8 gen() 6977 MB/s
Mar 4 00:42:49.872485 kernel: raid6: int64x4 gen() 7369 MB/s
Mar 4 00:42:49.892477 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 4 00:42:49.914336 kernel: raid6: int64x1 gen() 5074 MB/s
Mar 4 00:42:49.914397 kernel: raid6: using algorithm neonx8 gen() 15799 MB/s
Mar 4 00:42:49.937303 kernel: raid6: .... xor() 12016 MB/s, rmw enabled
Mar 4 00:42:49.937317 kernel: raid6: using neon recovery algorithm
Mar 4 00:42:49.944468 kernel: xor: measuring software checksum speed
Mar 4 00:42:49.950952 kernel: 8regs : 19066 MB/sec
Mar 4 00:42:49.950975 kernel: 32regs : 19650 MB/sec
Mar 4 00:42:49.953872 kernel: arm64_neon : 27043 MB/sec
Mar 4 00:42:49.957250 kernel: xor: using function: arm64_neon (27043 MB/sec)
Mar 4 00:42:50.006474 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 00:42:50.015735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:42:50.029584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:42:50.048546 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Mar 4 00:42:50.052819 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:42:50.068568 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 00:42:50.094642 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Mar 4 00:42:50.120586 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:42:50.136723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:42:50.173216 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:42:50.189621 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 00:42:50.214684 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:42:50.224204 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:42:50.240905 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:42:50.257906 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:42:50.280704 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 00:42:50.299594 kernel: hv_vmbus: Vmbus version:5.3
Mar 4 00:42:50.295193 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:42:50.305343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:42:50.332533 kernel: hv_vmbus: registering driver hid_hyperv
Mar 4 00:42:50.332557 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 4 00:42:50.332567 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 4 00:42:50.332577 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 4 00:42:50.305512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:50.353739 kernel: hv_vmbus: registering driver hv_netvsc
Mar 4 00:42:50.353761 kernel: PTP clock support registered
Mar 4 00:42:50.332553 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:42:50.376491 kernel: hv_utils: Registering HyperV Utility Driver
Mar 4 00:42:50.376524 kernel: hv_vmbus: registering driver hv_utils
Mar 4 00:42:50.343533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:42:50.643181 kernel: hv_utils: Heartbeat IC version 3.0
Mar 4 00:42:50.643209 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 4 00:42:50.643227 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 4 00:42:50.643237 kernel: hv_utils: Shutdown IC version 3.2
Mar 4 00:42:50.643246 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 4 00:42:50.643394 kernel: hv_utils: TimeSync IC version 4.0
Mar 4 00:42:50.643405 kernel: hv_vmbus: registering driver hv_storvsc
Mar 4 00:42:50.343755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:50.668679 kernel: scsi host0: storvsc_host_t
Mar 4 00:42:50.671446 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 4 00:42:50.671480 kernel: scsi host1: storvsc_host_t
Mar 4 00:42:50.671574 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 4 00:42:50.362612 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:50.636084 systemd-resolved[251]: Clock change detected. Flushing caches.
Mar 4 00:42:50.637513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:50.670965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:42:50.671057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:50.714133 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: VF slot 1 added
Mar 4 00:42:50.687615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:42:50.726266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:50.740050 kernel: hv_vmbus: registering driver hv_pci
Mar 4 00:42:50.753864 kernel: hv_pci e0334db2-fbf1-41c9-812e-aedc5b2a642d: PCI VMBus probing: Using version 0x10004
Mar 4 00:42:50.754090 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 4 00:42:50.754227 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 00:42:50.756054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:42:50.777417 kernel: hv_pci e0334db2-fbf1-41c9-812e-aedc5b2a642d: PCI host bridge to bus fbf1:00
Mar 4 00:42:50.777582 kernel: pci_bus fbf1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 4 00:42:50.778925 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 4 00:42:50.785973 kernel: pci_bus fbf1:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 4 00:42:50.798237 kernel: pci fbf1:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 4 00:42:50.792866 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:50.810410 kernel: pci fbf1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:42:50.819699 kernel: pci fbf1:00:02.0: enabling Extended Tags
Mar 4 00:42:50.819770 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 4 00:42:50.819936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:42:50.829327 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 4 00:42:50.833955 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 4 00:42:50.839913 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 4 00:42:50.840079 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 4 00:42:50.840193 kernel: pci fbf1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fbf1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 4 00:42:50.858119 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:50.858162 kernel: pci_bus fbf1:00: busn_res: [bus 00-ff] end is updated to 00
Mar 4 00:42:50.863121 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 4 00:42:50.863297 kernel: pci fbf1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 00:42:50.886291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#96 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:42:50.918191 kernel: mlx5_core fbf1:00:02.0: enabling device (0000 -> 0002)
Mar 4 00:42:50.924120 kernel: mlx5_core fbf1:00:02.0: firmware version: 16.30.5026
Mar 4 00:42:51.118379 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: VF registering: eth1
Mar 4 00:42:51.118570 kernel: mlx5_core fbf1:00:02.0 eth1: joined to eth0
Mar 4 00:42:51.123952 kernel: mlx5_core fbf1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 4 00:42:51.135134 kernel: mlx5_core fbf1:00:02.0 enP64497s1: renamed from eth1
Mar 4 00:42:51.433318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 4 00:42:51.456065 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (493)
Mar 4 00:42:51.470126 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 00:42:51.480924 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 4 00:42:51.497918 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (491)
Mar 4 00:42:51.510235 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 4 00:42:51.516417 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 4 00:42:51.541387 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 00:42:51.565229 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:51.574125 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:51.584132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:52.585185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 00:42:52.585239 disk-uuid[605]: The operation has completed successfully.
Mar 4 00:42:52.642027 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 00:42:52.642134 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 00:42:52.679297 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 00:42:52.689957 sh[718]: Success
Mar 4 00:42:52.719178 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 4 00:42:53.021381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 00:42:53.030899 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 00:42:53.046201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 00:42:53.072558 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d
Mar 4 00:42:53.072606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:53.078108 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 00:42:53.082439 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 00:42:53.086311 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 00:42:53.426022 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 00:42:53.430161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 00:42:53.447350 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 00:42:53.454317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 00:42:53.487824 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:53.487885 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:53.491648 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:42:53.534221 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:42:53.546466 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 00:42:53.556201 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:53.550446 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:42:53.569311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:42:53.576646 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 00:42:53.592329 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 00:42:53.606001 systemd-networkd[899]: lo: Link UP
Mar 4 00:42:53.606013 systemd-networkd[899]: lo: Gained carrier
Mar 4 00:42:53.607645 systemd-networkd[899]: Enumeration completed
Mar 4 00:42:53.609912 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:42:53.610491 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:42:53.610495 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:42:53.615342 systemd[1]: Reached target network.target - Network.
Mar 4 00:42:53.692137 kernel: mlx5_core fbf1:00:02.0 enP64497s1: Link up
Mar 4 00:42:53.731134 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: Data path switched to VF: enP64497s1
Mar 4 00:42:53.730773 systemd-networkd[899]: enP64497s1: Link UP
Mar 4 00:42:53.730863 systemd-networkd[899]: eth0: Link UP
Mar 4 00:42:53.730983 systemd-networkd[899]: eth0: Gained carrier
Mar 4 00:42:53.730991 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:42:53.737520 systemd-networkd[899]: enP64497s1: Gained carrier
Mar 4 00:42:53.759150 systemd-networkd[899]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:42:54.635771 ignition[902]: Ignition 2.19.0
Mar 4 00:42:54.635780 ignition[902]: Stage: fetch-offline
Mar 4 00:42:54.639763 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:42:54.635815 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.635823 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.635914 ignition[902]: parsed url from cmdline: ""
Mar 4 00:42:54.635917 ignition[902]: no config URL provided
Mar 4 00:42:54.635921 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.635927 ignition[902]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.664395 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 4 00:42:54.635934 ignition[902]: failed to fetch config: resource requires networking
Mar 4 00:42:54.636143 ignition[902]: Ignition finished successfully
Mar 4 00:42:54.684983 ignition[913]: Ignition 2.19.0
Mar 4 00:42:54.684989 ignition[913]: Stage: fetch
Mar 4 00:42:54.685859 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.685879 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.686010 ignition[913]: parsed url from cmdline: ""
Mar 4 00:42:54.686015 ignition[913]: no config URL provided
Mar 4 00:42:54.686019 ignition[913]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.686027 ignition[913]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:42:54.686050 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 4 00:42:54.770714 ignition[913]: GET result: OK
Mar 4 00:42:54.770839 ignition[913]: config has been read from IMDS userdata
Mar 4 00:42:54.770885 ignition[913]: parsing config with SHA512: 8ceaca32b995bde54eac9a56bf097cc5d3648d4c95999c2e18ab917aedf2204cae718e24fd2633984150fffd2920940a11211ee1c40892a48535585d5217df5a
Mar 4 00:42:54.775979 unknown[913]: fetched base config from "system"
Mar 4 00:42:54.776392 ignition[913]: fetch: fetch complete
Mar 4 00:42:54.775987 unknown[913]: fetched base config from "system"
Mar 4 00:42:54.776396 ignition[913]: fetch: fetch passed
Mar 4 00:42:54.775993 unknown[913]: fetched user config from "azure"
Mar 4 00:42:54.776442 ignition[913]: Ignition finished successfully
Mar 4 00:42:54.778688 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 00:42:54.799238 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 00:42:54.817186 ignition[919]: Ignition 2.19.0
Mar 4 00:42:54.817203 ignition[919]: Stage: kargs
Mar 4 00:42:54.821213 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 00:42:54.817393 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.817402 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.818293 ignition[919]: kargs: kargs passed
Mar 4 00:42:54.818352 ignition[919]: Ignition finished successfully
Mar 4 00:42:54.844441 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 00:42:54.861819 ignition[926]: Ignition 2.19.0
Mar 4 00:42:54.861832 ignition[926]: Stage: disks
Mar 4 00:42:54.867315 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 00:42:54.862048 ignition[926]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:54.872639 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 00:42:54.862057 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:54.881800 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 00:42:54.864346 ignition[926]: disks: disks passed
Mar 4 00:42:54.885439 systemd-networkd[899]: eth0: Gained IPv6LL
Mar 4 00:42:54.864399 ignition[926]: Ignition finished successfully
Mar 4 00:42:54.891740 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:42:54.900823 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 00:42:54.909790 systemd[1]: Reached target basic.target - Basic System.
Mar 4 00:42:54.929348 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 00:42:55.023615 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 4 00:42:55.032974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 00:42:55.049328 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 00:42:55.101120 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none.
Mar 4 00:42:55.101254 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 00:42:55.105505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:42:55.161175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:42:55.182120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Mar 4 00:42:55.192635 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:55.192652 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:55.197126 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:42:55.206133 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:42:55.206283 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 00:42:55.215750 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 4 00:42:55.226200 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 00:42:55.226233 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:42:55.233216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:42:55.245391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 00:42:55.267375 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 00:42:55.988305 coreos-metadata[962]: Mar 04 00:42:55.988 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 00:42:55.996594 coreos-metadata[962]: Mar 04 00:42:55.996 INFO Fetch successful
Mar 4 00:42:56.001454 coreos-metadata[962]: Mar 04 00:42:56.001 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 4 00:42:56.021212 coreos-metadata[962]: Mar 04 00:42:56.021 INFO Fetch successful
Mar 4 00:42:56.026442 coreos-metadata[962]: Mar 04 00:42:56.023 INFO wrote hostname ci-4081.3.6-n-d3c3414975 to /sysroot/etc/hostname
Mar 4 00:42:56.026189 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:42:56.141160 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 00:42:56.180661 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory
Mar 4 00:42:56.187068 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 00:42:56.194678 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 00:42:57.098913 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 00:42:57.111299 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 00:42:57.117573 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 00:42:57.137622 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 00:42:57.141926 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:57.159762 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 00:42:57.173809 ignition[1064]: INFO : Ignition 2.19.0
Mar 4 00:42:57.173809 ignition[1064]: INFO : Stage: mount
Mar 4 00:42:57.183173 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:57.183173 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:57.183173 ignition[1064]: INFO : mount: mount passed
Mar 4 00:42:57.183173 ignition[1064]: INFO : Ignition finished successfully
Mar 4 00:42:57.178583 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 00:42:57.197341 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 00:42:57.216334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:42:57.249119 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Mar 4 00:42:57.259830 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 00:42:57.259856 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 00:42:57.263183 kernel: BTRFS info (device sda6): using free space tree
Mar 4 00:42:57.271083 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 00:42:57.271711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:42:57.301726 ignition[1092]: INFO : Ignition 2.19.0
Mar 4 00:42:57.301726 ignition[1092]: INFO : Stage: files
Mar 4 00:42:57.308216 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:57.308216 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:57.308216 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 00:42:57.322840 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 00:42:57.322840 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 00:42:57.468494 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 00:42:57.474691 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 00:42:57.474691 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 00:42:57.468882 unknown[1092]: wrote ssh authorized keys file for user: core
Mar 4 00:42:57.490338 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:42:57.490338 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 4 00:42:57.530365 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 00:42:57.659233 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:57.669144 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 4 00:42:58.156851 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 4 00:42:58.705440 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 00:42:58.705440 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 4 00:42:58.723678 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:42:58.737348 ignition[1092]: INFO : files: files passed
Mar 4 00:42:58.737348 ignition[1092]: INFO : Ignition finished successfully
Mar 4 00:42:58.732600 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 00:42:58.759362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 00:42:58.771535 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 00:42:58.782475 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 00:42:58.839815 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:42:58.839815 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:42:58.784135 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 00:42:58.858803 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:42:58.817965 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:42:58.825407 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 00:42:58.859363 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 00:42:58.902493 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 00:42:58.902625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 00:42:58.912066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 00:42:58.922262 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 00:42:58.931313 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 00:42:58.947615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 00:42:58.968569 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:42:58.982357 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 00:42:58.998642 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:42:59.004314 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:42:59.014339 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 00:42:59.023262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 00:42:59.023424 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:42:59.036525 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 00:42:59.046304 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 00:42:59.054599 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 00:42:59.063461 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:42:59.073341 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 00:42:59.083029 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 00:42:59.092391 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:42:59.101852 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 00:42:59.111698 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 00:42:59.120361 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 00:42:59.127960 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 00:42:59.128140 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:42:59.140121 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:42:59.149311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:42:59.158732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 00:42:59.158834 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:42:59.169239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 00:42:59.169396 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:42:59.183919 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 00:42:59.184085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:42:59.193494 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 00:42:59.193643 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 00:42:59.202287 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 4 00:42:59.202426 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 00:42:59.225207 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 00:42:59.242434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 00:42:59.261631 ignition[1145]: INFO : Ignition 2.19.0
Mar 4 00:42:59.261631 ignition[1145]: INFO : Stage: umount
Mar 4 00:42:59.261631 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:42:59.261631 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 00:42:59.261631 ignition[1145]: INFO : umount: umount passed
Mar 4 00:42:59.261631 ignition[1145]: INFO : Ignition finished successfully
Mar 4 00:42:59.251955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 00:42:59.252136 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:42:59.258446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 00:42:59.258590 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:42:59.280765 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 00:42:59.281665 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 00:42:59.283238 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 00:42:59.296171 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 00:42:59.296267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 00:42:59.315767 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 00:42:59.315815 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 00:42:59.324028 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 00:42:59.324069 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 00:42:59.332848 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 00:42:59.332885 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 00:42:59.342223 systemd[1]: Stopped target network.target - Network.
Mar 4 00:42:59.350859 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 00:42:59.350926 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:42:59.360691 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:42:59.368639 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:42:59.374147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:42:59.383341 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:42:59.393227 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:42:59.401655 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:42:59.401707 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:42:59.410269 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:42:59.410316 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:42:59.418533 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:42:59.418580 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:42:59.426912 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:42:59.426953 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:42:59.435132 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:42:59.444880 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 00:42:59.459150 systemd-networkd[899]: eth0: DHCPv6 lease lost
Mar 4 00:42:59.463552 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 00:42:59.463670 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 00:42:59.469441 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 00:42:59.612787 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: Data path switched from VF: enP64497s1
Mar 4 00:42:59.469478 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:42:59.495462 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 00:42:59.502509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 00:42:59.502587 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:42:59.512031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:42:59.525407 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 00:42:59.525516 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 00:42:59.546564 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 00:42:59.546760 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:42:59.557046 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 00:42:59.557289 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:42:59.565235 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 00:42:59.565274 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:42:59.574553 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 00:42:59.574603 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:42:59.586727 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 00:42:59.586773 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:42:59.596273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:42:59.596345 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:42:59.626355 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 00:42:59.643000 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 00:42:59.643074 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:42:59.647987 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 00:42:59.648029 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:42:59.657662 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 00:42:59.657711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:42:59.667734 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 00:42:59.667788 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:42:59.677045 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 00:42:59.677164 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:42:59.688430 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 00:42:59.688479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:42:59.698393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:42:59.698439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:42:59.708238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 00:42:59.708358 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 00:42:59.726535 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 00:42:59.728131 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 00:42:59.895169 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 00:42:59.895293 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 00:42:59.903888 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 00:42:59.909223 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 00:42:59.909286 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 00:42:59.932336 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 00:42:59.946384 systemd[1]: Switching root.
Mar 4 00:43:00.041989 systemd-journald[217]: Journal stopped
Mar 4 00:43:05.286952 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 4 00:43:05.286987 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 00:43:05.286997 kernel: SELinux: policy capability open_perms=1
Mar 4 00:43:05.287007 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 00:43:05.287015 kernel: SELinux: policy capability always_check_network=0
Mar 4 00:43:05.287022 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 00:43:05.287031 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 00:43:05.287039 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 00:43:05.287047 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 00:43:05.287055 systemd[1]: Successfully loaded SELinux policy in 181.759ms.
Mar 4 00:43:05.287066 kernel: audit: type=1403 audit(1772584981.488:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 00:43:05.287074 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.917ms.
Mar 4 00:43:05.287084 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:43:05.287093 systemd[1]: Detected virtualization microsoft.
Mar 4 00:43:05.287102 systemd[1]: Detected architecture arm64.
Mar 4 00:43:05.287122 systemd[1]: Detected first boot.
Mar 4 00:43:05.287134 systemd[1]: Hostname set to .
Mar 4 00:43:05.287143 systemd[1]: Initializing machine ID from random generator.
Mar 4 00:43:05.287152 zram_generator::config[1187]: No configuration found.
Mar 4 00:43:05.287162 systemd[1]: Populated /etc with preset unit settings.
Mar 4 00:43:05.287171 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 4 00:43:05.287182 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 4 00:43:05.287192 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 4 00:43:05.287201 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 00:43:05.287210 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 00:43:05.287220 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 00:43:05.287230 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 00:43:05.287239 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 00:43:05.287250 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 00:43:05.287259 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 00:43:05.287268 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 00:43:05.287277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:43:05.287286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:43:05.287295 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 00:43:05.287304 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 00:43:05.287314 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 00:43:05.287323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:43:05.287335 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 4 00:43:05.287344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:43:05.287353 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 4 00:43:05.287364 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 4 00:43:05.287374 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:43:05.287383 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 00:43:05.287393 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:43:05.287403 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:43:05.287413 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:43:05.287422 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:43:05.287431 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 00:43:05.287440 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 00:43:05.287450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:43:05.287459 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:43:05.287470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:43:05.287480 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 00:43:05.287489 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 00:43:05.287499 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 00:43:05.287508 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 00:43:05.287517 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 00:43:05.287528 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 00:43:05.287539 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 00:43:05.287548 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 00:43:05.287558 systemd[1]: Reached target machines.target - Containers.
Mar 4 00:43:05.287567 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 00:43:05.287577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:43:05.287587 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:43:05.287596 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 00:43:05.287607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:43:05.287616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:43:05.287626 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:43:05.287635 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 00:43:05.287644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:43:05.287654 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 00:43:05.287664 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 4 00:43:05.287673 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 4 00:43:05.287683 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 4 00:43:05.287694 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 4 00:43:05.287703 kernel: fuse: init (API version 7.39)
Mar 4 00:43:05.287712 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:43:05.287721 kernel: loop: module loaded
Mar 4 00:43:05.287729 kernel: ACPI: bus type drm_connector registered
Mar 4 00:43:05.287739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:43:05.287748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 00:43:05.287772 systemd-journald[1290]: Collecting audit messages is disabled.
Mar 4 00:43:05.287792 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 00:43:05.287803 systemd-journald[1290]: Journal started
Mar 4 00:43:05.287823 systemd-journald[1290]: Runtime Journal (/run/log/journal/b26fa52ca4e64cba9638835a6b0f8113) is 8.0M, max 78.5M, 70.5M free.
Mar 4 00:43:04.360970 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 00:43:04.544001 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 4 00:43:04.544332 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 4 00:43:04.544635 systemd[1]: systemd-journald.service: Consumed 2.521s CPU time.
Mar 4 00:43:05.308750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:43:05.317775 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 4 00:43:05.317832 systemd[1]: Stopped verity-setup.service.
Mar 4 00:43:05.330598 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:43:05.331384 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 00:43:05.336001 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 00:43:05.340745 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 00:43:05.345255 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 00:43:05.350406 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 00:43:05.355438 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 00:43:05.359865 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 00:43:05.365638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:43:05.371397 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 00:43:05.371527 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 00:43:05.376914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:43:05.377039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:43:05.382597 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:43:05.382719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:43:05.387658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:43:05.387783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:43:05.393378 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 00:43:05.393498 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 00:43:05.398784 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:43:05.398900 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:43:05.404034 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:43:05.409352 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 00:43:05.414959 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 00:43:05.420813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:43:05.436265 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 00:43:05.449242 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 00:43:05.455050 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 00:43:05.459850 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 00:43:05.459896 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:43:05.465383 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 00:43:05.471854 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 00:43:05.478245 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 00:43:05.482874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:43:05.484362 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 00:43:05.491310 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 00:43:05.497514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 00:43:05.498509 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 00:43:05.503298 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 00:43:05.506284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:43:05.512419 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 00:43:05.520265 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:43:05.529438 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 00:43:05.538871 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 00:43:05.544270 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 00:43:05.550243 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 00:43:05.558213 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 00:43:05.560355 systemd-journald[1290]: Time spent on flushing to /var/log/journal/b26fa52ca4e64cba9638835a6b0f8113 is 17.291ms for 899 entries.
Mar 4 00:43:05.560355 systemd-journald[1290]: System Journal (/var/log/journal/b26fa52ca4e64cba9638835a6b0f8113) is 8.0M, max 2.6G, 2.6G free.
Mar 4 00:43:05.623261 systemd-journald[1290]: Received client request to flush runtime journal.
Mar 4 00:43:05.623323 kernel: loop0: detected capacity change from 0 to 209336
Mar 4 00:43:05.573177 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 00:43:05.592379 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 00:43:05.598156 udevadm[1324]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 4 00:43:05.625407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 00:43:05.634735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:43:05.649168 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 00:43:05.654545 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 00:43:05.656584 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 00:43:05.668634 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Mar 4 00:43:05.668650 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Mar 4 00:43:05.672828 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:43:05.687262 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 00:43:05.706132 kernel: loop1: detected capacity change from 0 to 114328
Mar 4 00:43:05.837975 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 00:43:05.849288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:43:05.866049 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Mar 4 00:43:05.866067 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Mar 4 00:43:05.869712 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:43:06.134121 kernel: loop2: detected capacity change from 0 to 31320
Mar 4 00:43:06.332637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 00:43:06.343301 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:43:06.368169 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Mar 4 00:43:06.530888 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:43:06.547632 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:43:06.591421 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 4 00:43:06.618334 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 00:43:06.620140 kernel: loop3: detected capacity change from 0 to 114432
Mar 4 00:43:06.680847 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 4 00:43:06.687136 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 00:43:06.738279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 00:43:06.742711 kernel: hv_vmbus: registering driver hyperv_fb
Mar 4 00:43:06.742807 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 4 00:43:06.751969 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 4 00:43:06.760978 kernel: hv_vmbus: registering driver hv_balloon
Mar 4 00:43:06.761065 kernel: Console: switching to colour dummy device 80x25
Mar 4 00:43:06.765019 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 4 00:43:06.765194 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 4 00:43:06.773600 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 00:43:06.786497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:43:06.804392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:43:06.804557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:43:06.816561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:43:06.851368 systemd-networkd[1361]: lo: Link UP
Mar 4 00:43:06.851376 systemd-networkd[1361]: lo: Gained carrier
Mar 4 00:43:06.853680 systemd-networkd[1361]: Enumeration completed
Mar 4 00:43:06.853958 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:43:06.854438 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:43:06.854444 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:43:06.863734 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1358)
Mar 4 00:43:06.873304 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 4 00:43:06.919474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 00:43:06.929126 kernel: mlx5_core fbf1:00:02.0 enP64497s1: Link up
Mar 4 00:43:06.932625 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 00:43:06.955484 kernel: hv_netvsc 7ced8dc7-d9d9-7ced-8dc7-d9d97ced8dc7 eth0: Data path switched to VF: enP64497s1
Mar 4 00:43:06.954588 systemd-networkd[1361]: enP64497s1: Link UP
Mar 4 00:43:06.954687 systemd-networkd[1361]: eth0: Link UP
Mar 4 00:43:06.954690 systemd-networkd[1361]: eth0: Gained carrier
Mar 4 00:43:06.954705 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:43:06.959513 systemd-networkd[1361]: enP64497s1: Gained carrier
Mar 4 00:43:06.972165 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 00:43:07.010958 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 00:43:07.022407 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 00:43:07.040143 kernel: loop4: detected capacity change from 0 to 209336
Mar 4 00:43:07.043780 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 00:43:07.061454 kernel: loop5: detected capacity change from 0 to 114328
Mar 4 00:43:07.075144 kernel: loop6: detected capacity change from 0 to 31320
Mar 4 00:43:07.089195 kernel: loop7: detected capacity change from 0 to 114432
Mar 4 00:43:07.098503 (sd-merge)[1444]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 4 00:43:07.098925 (sd-merge)[1444]: Merged extensions into '/usr'.
Mar 4 00:43:07.102810 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 00:43:07.102828 systemd[1]: Reloading...
Mar 4 00:43:07.154662 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 00:43:07.185221 zram_generator::config[1474]: No configuration found.
Mar 4 00:43:07.323614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:43:07.398377 systemd[1]: Reloading finished in 295 ms.
Mar 4 00:43:07.435319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:43:07.441389 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 00:43:07.447589 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 00:43:07.457118 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:43:07.467386 systemd[1]: Starting ensure-sysext.service...
Mar 4 00:43:07.473325 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 00:43:07.480381 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:43:07.483003 lvm[1535]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 00:43:07.499187 systemd[1]: Reloading requested from client PID 1534 ('systemctl') (unit ensure-sysext.service)...
Mar 4 00:43:07.499211 systemd[1]: Reloading...
Mar 4 00:43:07.528663 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 00:43:07.529750 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 00:43:07.531351 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 00:43:07.531694 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Mar 4 00:43:07.531807 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Mar 4 00:43:07.536237 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:43:07.538227 systemd-tmpfiles[1536]: Skipping /boot
Mar 4 00:43:07.551319 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 00:43:07.551445 systemd-tmpfiles[1536]: Skipping /boot
Mar 4 00:43:07.585133 zram_generator::config[1563]: No configuration found.
Mar 4 00:43:07.697187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:43:07.774352 systemd[1]: Reloading finished in 274 ms.
Mar 4 00:43:07.795651 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 00:43:07.803261 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:43:07.819510 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 00:43:07.826725 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 00:43:07.834489 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 00:43:07.849616 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:43:07.855769 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 00:43:07.865967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:43:07.869398 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:43:07.878592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:43:07.892393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:43:07.900548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:43:07.901684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:43:07.902958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:43:07.912026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:43:07.912219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:43:07.920898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:43:07.921033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:43:07.932948 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 00:43:07.949231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:43:07.955359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:43:07.964142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:43:07.974425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:43:07.986228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:43:07.993344 augenrules[1654]: No rules
Mar 4 00:43:07.993373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:43:07.993569 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 00:43:08.002141 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 00:43:08.007795 systemd-resolved[1634]: Positive Trust Anchors:
Mar 4 00:43:08.007812 systemd-resolved[1634]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:43:08.007844 systemd-resolved[1634]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:43:08.009964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:43:08.010132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:43:08.015442 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:43:08.015581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:43:08.022096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:43:08.023166 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:43:08.026984 systemd-resolved[1634]: Using system hostname 'ci-4081.3.6-n-d3c3414975'.
Mar 4 00:43:08.029518 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:43:08.035017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:43:08.035307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:43:08.047472 systemd[1]: Finished ensure-sysext.service.
Mar 4 00:43:08.054590 systemd[1]: Reached target network.target - Network.
Mar 4 00:43:08.058633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:43:08.064041 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 00:43:08.064103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 00:43:08.101806 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 00:43:08.638291 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 00:43:08.644517 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 00:43:08.704268 systemd-networkd[1361]: eth0: Gained IPv6LL
Mar 4 00:43:08.706518 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 00:43:08.712622 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 00:43:12.324898 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 00:43:12.339150 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 00:43:12.351247 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 00:43:12.363102 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 00:43:12.368249 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 00:43:12.373045 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 00:43:12.378565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 00:43:12.384063 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 00:43:12.388813 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 00:43:12.394462 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 00:43:12.400051 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 00:43:12.400087 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:43:12.404087 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:43:12.409048 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 00:43:12.415214 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 00:43:12.423723 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 00:43:12.428816 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 00:43:12.433789 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:43:12.437965 systemd[1]: Reached target basic.target - Basic System.
Mar 4 00:43:12.442059 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 00:43:12.442084 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 00:43:12.454192 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 4 00:43:12.460250 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 00:43:12.477528 (chronyd)[1676]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 4 00:43:12.479131 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 4 00:43:12.488020 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 00:43:12.493327 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 00:43:12.501237 jq[1682]: false
Mar 4 00:43:12.503247 chronyd[1685]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 4 00:43:12.510704 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 00:43:12.515270 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 00:43:12.515308 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 4 00:43:12.516320 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 4 00:43:12.520901 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 4 00:43:12.522953 KVP[1686]: KVP starting; pid is:1686
Mar 4 00:43:12.523369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:43:12.531301 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 00:43:12.538312 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 00:43:12.555213 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 00:43:12.560859 chronyd[1685]: Timezone right/UTC failed leap second check, ignoring
Mar 4 00:43:12.561052 chronyd[1685]: Loaded seccomp filter (level 2)
Mar 4 00:43:12.561441 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 00:43:12.571330 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 00:43:12.589326 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 00:43:12.595270 extend-filesystems[1683]: Found loop4
Mar 4 00:43:12.595270 extend-filesystems[1683]: Found loop5
Mar 4 00:43:12.595270 extend-filesystems[1683]: Found loop6
Mar 4 00:43:12.595270 extend-filesystems[1683]: Found loop7
Mar 4 00:43:12.595270 extend-filesystems[1683]: Found sda
Mar 4 00:43:12.595270 extend-filesystems[1683]: Found sda1
Mar 4 00:43:12.595270 extend-filesystems[1683]: Found sda2
Mar 4 00:43:12.709680 kernel: hv_utils: KVP IC version 4.0
Mar 4 00:43:12.608806 KVP[1686]: KVP LIC Version: 3.1
Mar 4 00:43:12.709990 extend-filesystems[1683]: Found sda3
Mar 4 00:43:12.709990 extend-filesystems[1683]: Found usr
Mar 4 00:43:12.709990 extend-filesystems[1683]: Found sda4
Mar 4 00:43:12.709990 extend-filesystems[1683]: Found sda6
Mar 4 00:43:12.709990 extend-filesystems[1683]: Found sda7
Mar 4 00:43:12.709990 extend-filesystems[1683]: Found sda9
Mar 4 00:43:12.709990 extend-filesystems[1683]: Checking size of /dev/sda9
Mar 4 00:43:12.709990 extend-filesystems[1683]: Old size kept for /dev/sda9
Mar 4 00:43:12.709990 extend-filesystems[1683]: Found sr0
Mar 4 00:43:12.599343 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 00:43:12.706366 dbus-daemon[1679]: [system] SELinux support is enabled
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.822 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.829 INFO Fetch successful
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.829 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.840 INFO Fetch successful
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.842 INFO Fetching http://168.63.129.16/machine/17ad12d6-c5ad-4fc3-9d5d-9bd56cb95f7a/1fdbc3c6%2D2153%2D4401%2D9eb1%2D96d72f063982.%5Fci%2D4081.3.6%2Dn%2Dd3c3414975?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.846 INFO Fetch successful
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.846 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 4 00:43:12.943673 coreos-metadata[1678]: Mar 04 00:43:12.860 INFO Fetch successful
Mar 4 00:43:12.599808 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 4 00:43:12.944477 jq[1708]: true
Mar 4 00:43:12.600511 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 00:43:12.945527 update_engine[1704]: I20260304 00:43:12.722738 1704 main.cc:92] Flatcar Update Engine starting
Mar 4 00:43:12.945527 update_engine[1704]: I20260304 00:43:12.731770 1704 update_check_scheduler.cc:74] Next update check in 7m49s
Mar 4 00:43:12.617387 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 00:43:12.629698 systemd[1]: Started chronyd.service - NTP client/server.
Mar 4 00:43:12.947320 tar[1717]: linux-arm64/LICENSE
Mar 4 00:43:12.947320 tar[1717]: linux-arm64/helm
Mar 4 00:43:12.642577 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 00:43:12.644149 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 00:43:12.947712 jq[1720]: true
Mar 4 00:43:12.645447 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 00:43:12.645625 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 00:43:12.655520 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 00:43:12.675151 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 00:43:12.675320 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 00:43:12.688976 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 00:43:12.689202 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 00:43:12.712881 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 00:43:12.724515 (ntainerd)[1721]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 00:43:12.746902 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 00:43:12.747441 systemd-logind[1698]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 4 00:43:12.756412 systemd-logind[1698]: New seat seat0.
Mar 4 00:43:12.760322 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 4 00:43:12.790334 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 00:43:12.790539 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 00:43:12.819384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 00:43:12.819575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 00:43:12.868536 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 4 00:43:12.921806 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 4 00:43:12.952622 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 00:43:12.992167 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1739)
Mar 4 00:43:12.996516 bash[1764]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 00:43:13.004364 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 4 00:43:13.017183 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 4 00:43:13.178518 locksmithd[1750]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 4 00:43:13.500946 tar[1717]: linux-arm64/README.md
Mar 4 00:43:13.515720 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 4 00:43:13.630117 containerd[1721]: time="2026-03-04T00:43:13.629307840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 4 00:43:13.685747 containerd[1721]: time="2026-03-04T00:43:13.685704800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690250 containerd[1721]: time="2026-03-04T00:43:13.690216280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690250 containerd[1721]: time="2026-03-04T00:43:13.690249120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 4 00:43:13.690327 containerd[1721]: time="2026-03-04T00:43:13.690265440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 4 00:43:13.690434 containerd[1721]: time="2026-03-04T00:43:13.690417600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 4 00:43:13.690463 containerd[1721]: time="2026-03-04T00:43:13.690438840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690511 containerd[1721]: time="2026-03-04T00:43:13.690496400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690537 containerd[1721]: time="2026-03-04T00:43:13.690511200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690694 containerd[1721]: time="2026-03-04T00:43:13.690676880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690723 containerd[1721]: time="2026-03-04T00:43:13.690694320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690723 containerd[1721]: time="2026-03-04T00:43:13.690709560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:43:13.690723 containerd[1721]: time="2026-03-04T00:43:13.690718920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 4 00:43:13.691002 containerd[1721]: time="2026-03-04T00:43:13.690787680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:43:13.691002 containerd[1721]: time="2026-03-04T00:43:13.690966440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 4 00:43:13.691079 containerd[1721]: time="2026-03-04T00:43:13.691062120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 00:43:13.691116 containerd[1721]: time="2026-03-04T00:43:13.691077440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 4 00:43:13.691194 containerd[1721]: time="2026-03-04T00:43:13.691177840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 4 00:43:13.691236 containerd[1721]: time="2026-03-04T00:43:13.691223600Z" level=info msg="metadata content store policy set" policy=shared
Mar 4 00:43:13.704954 containerd[1721]: time="2026-03-04T00:43:13.704918400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 4 00:43:13.705021 containerd[1721]: time="2026-03-04T00:43:13.704979880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 4 00:43:13.705021 containerd[1721]: time="2026-03-04T00:43:13.704996920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 4 00:43:13.705021 containerd[1721]: time="2026-03-04T00:43:13.705011760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 4 00:43:13.705094 containerd[1721]: time="2026-03-04T00:43:13.705026440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705196040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705418760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705507640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705521880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705543240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705558360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705571560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705584800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705598480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705612400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705628440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705640320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705653960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 4 00:43:13.706778 containerd[1721]: time="2026-03-04T00:43:13.705672600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705685320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705696960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705710120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705723080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705739160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705750120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705763520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705775600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705788840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705800440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705812640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705827920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705844200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705863520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707031 containerd[1721]: time="2026-03-04T00:43:13.705875320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.707333 containerd[1721]: time="2026-03-04T00:43:13.705885680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 00:43:13.708742 containerd[1721]: time="2026-03-04T00:43:13.708719320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 00:43:13.708769 containerd[1721]: time="2026-03-04T00:43:13.708754280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 00:43:13.708789 containerd[1721]: time="2026-03-04T00:43:13.708765960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 00:43:13.708789 containerd[1721]: time="2026-03-04T00:43:13.708777760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 00:43:13.708834 containerd[1721]: time="2026-03-04T00:43:13.708787160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.708834 containerd[1721]: time="2026-03-04T00:43:13.708804240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 00:43:13.708834 containerd[1721]: time="2026-03-04T00:43:13.708814320Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 00:43:13.708834 containerd[1721]: time="2026-03-04T00:43:13.708824760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 00:43:13.709172 containerd[1721]: time="2026-03-04T00:43:13.709120320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 4 00:43:13.709276 containerd[1721]: time="2026-03-04T00:43:13.709181240Z" level=info msg="Connect containerd service"
Mar 4 00:43:13.709276 containerd[1721]: time="2026-03-04T00:43:13.709210400Z" level=info msg="using legacy CRI server"
Mar 4 00:43:13.709276 containerd[1721]: time="2026-03-04T00:43:13.709216520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 4 00:43:13.709330 containerd[1721]: time="2026-03-04T00:43:13.709301040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 4 00:43:13.712647 containerd[1721]: time="2026-03-04T00:43:13.711215320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 00:43:13.712647 containerd[1721]: time="2026-03-04T00:43:13.711694920Z" level=info msg="Start subscribing containerd event"
Mar 4 00:43:13.712647 containerd[1721]: time="2026-03-04T00:43:13.711743200Z" level=info msg="Start recovering state"
Mar 4 00:43:13.712647 containerd[1721]: time="2026-03-04T00:43:13.711802840Z" level=info msg="Start event monitor"
Mar 4 00:43:13.712647 containerd[1721]: time="2026-03-04T00:43:13.711813800Z" level=info msg="Start snapshots
syncer" Mar 4 00:43:13.712647 containerd[1721]: time="2026-03-04T00:43:13.711823080Z" level=info msg="Start cni network conf syncer for default" Mar 4 00:43:13.712647 containerd[1721]: time="2026-03-04T00:43:13.711830560Z" level=info msg="Start streaming server" Mar 4 00:43:13.712955 containerd[1721]: time="2026-03-04T00:43:13.712936560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 00:43:13.713140 containerd[1721]: time="2026-03-04T00:43:13.713126680Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 00:43:13.713263 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 00:43:13.718135 containerd[1721]: time="2026-03-04T00:43:13.716722600Z" level=info msg="containerd successfully booted in 0.089527s" Mar 4 00:43:13.773260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:43:13.780815 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:43:13.894531 sshd_keygen[1701]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 00:43:13.918202 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 00:43:13.931351 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 4 00:43:13.940224 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 4 00:43:13.945967 systemd[1]: issuegen.service: Deactivated successfully. Mar 4 00:43:13.946277 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 4 00:43:13.959234 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 00:43:13.970355 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 4 00:43:13.986208 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 00:43:13.997630 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 4 00:43:14.006264 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 4 00:43:14.011800 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 00:43:14.016131 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 00:43:14.021628 systemd[1]: Startup finished in 624ms (kernel) + 12.360s (initrd) + 12.714s (userspace) = 25.700s. Mar 4 00:43:14.273675 kubelet[1814]: E0304 00:43:14.273611 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:43:14.276522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:43:14.276673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:43:14.328124 login[1842]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 4 00:43:14.328920 login[1841]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:43:14.336017 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 00:43:14.340352 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 00:43:14.343218 systemd-logind[1698]: New session 2 of user core. Mar 4 00:43:14.367770 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 00:43:14.374635 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 4 00:43:14.394469 (systemd)[1851]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 00:43:14.541330 systemd[1851]: Queued start job for default target default.target. Mar 4 00:43:14.552100 systemd[1851]: Created slice app.slice - User Application Slice. 
Mar 4 00:43:14.552152 systemd[1851]: Reached target paths.target - Paths. Mar 4 00:43:14.552166 systemd[1851]: Reached target timers.target - Timers. Mar 4 00:43:14.555372 systemd[1851]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 00:43:14.565099 systemd[1851]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 00:43:14.565176 systemd[1851]: Reached target sockets.target - Sockets. Mar 4 00:43:14.565189 systemd[1851]: Reached target basic.target - Basic System. Mar 4 00:43:14.565233 systemd[1851]: Reached target default.target - Main User Target. Mar 4 00:43:14.565262 systemd[1851]: Startup finished in 164ms. Mar 4 00:43:14.565651 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 00:43:14.573470 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 00:43:15.329594 login[1842]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:43:15.333567 systemd-logind[1698]: New session 1 of user core. Mar 4 00:43:15.338238 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 4 00:43:15.867648 waagent[1838]: 2026-03-04T00:43:15.863076Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 4 00:43:15.868215 waagent[1838]: 2026-03-04T00:43:15.868150Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 4 00:43:15.871855 waagent[1838]: 2026-03-04T00:43:15.871803Z INFO Daemon Daemon Python: 3.11.9 Mar 4 00:43:15.875504 waagent[1838]: 2026-03-04T00:43:15.875444Z INFO Daemon Daemon Run daemon Mar 4 00:43:15.878951 waagent[1838]: 2026-03-04T00:43:15.878905Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 4 00:43:15.886168 waagent[1838]: 2026-03-04T00:43:15.886101Z INFO Daemon Daemon Using waagent for provisioning Mar 4 00:43:15.890516 waagent[1838]: 2026-03-04T00:43:15.890472Z INFO Daemon Daemon Activate resource disk Mar 4 00:43:15.894331 waagent[1838]: 2026-03-04T00:43:15.894284Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 4 00:43:15.903914 waagent[1838]: 2026-03-04T00:43:15.903855Z INFO Daemon Daemon Found device: None Mar 4 00:43:15.907576 waagent[1838]: 2026-03-04T00:43:15.907525Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 4 00:43:15.914201 waagent[1838]: 2026-03-04T00:43:15.914145Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 4 00:43:15.924777 waagent[1838]: 2026-03-04T00:43:15.924715Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 4 00:43:15.929568 waagent[1838]: 2026-03-04T00:43:15.929515Z INFO Daemon Daemon Running default provisioning handler Mar 4 00:43:15.940388 waagent[1838]: 2026-03-04T00:43:15.940315Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 4 00:43:15.952205 waagent[1838]: 2026-03-04T00:43:15.952142Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 4 00:43:15.959967 waagent[1838]: 2026-03-04T00:43:15.959906Z INFO Daemon Daemon cloud-init is enabled: False Mar 4 00:43:15.964247 waagent[1838]: 2026-03-04T00:43:15.964191Z INFO Daemon Daemon Copying ovf-env.xml Mar 4 00:43:16.098317 waagent[1838]: 2026-03-04T00:43:16.098232Z INFO Daemon Daemon Successfully mounted dvd Mar 4 00:43:16.130335 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 4 00:43:16.131406 waagent[1838]: 2026-03-04T00:43:16.131332Z INFO Daemon Daemon Detect protocol endpoint Mar 4 00:43:16.135774 waagent[1838]: 2026-03-04T00:43:16.135721Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 4 00:43:16.140322 waagent[1838]: 2026-03-04T00:43:16.140282Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 4 00:43:16.145185 waagent[1838]: 2026-03-04T00:43:16.145151Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 4 00:43:16.149328 waagent[1838]: 2026-03-04T00:43:16.149292Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 4 00:43:16.153543 waagent[1838]: 2026-03-04T00:43:16.153508Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 4 00:43:16.221354 waagent[1838]: 2026-03-04T00:43:16.221310Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 4 00:43:16.226654 waagent[1838]: 2026-03-04T00:43:16.226628Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 4 00:43:16.231198 waagent[1838]: 2026-03-04T00:43:16.231159Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 4 00:43:16.690489 waagent[1838]: 2026-03-04T00:43:16.690394Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 4 00:43:16.695648 waagent[1838]: 2026-03-04T00:43:16.695595Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 4 00:43:16.703464 waagent[1838]: 2026-03-04T00:43:16.703421Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 4 00:43:16.722816 waagent[1838]: 2026-03-04T00:43:16.722772Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 4 00:43:16.727534 waagent[1838]: 2026-03-04T00:43:16.727492Z INFO Daemon Mar 4 00:43:16.729770 waagent[1838]: 2026-03-04T00:43:16.729730Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ed7a9a35-8dfa-46cc-b5d1-c5d26b43eb49 eTag: 7240242637158173164 source: Fabric] Mar 4 00:43:16.738840 waagent[1838]: 2026-03-04T00:43:16.738799Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 4 00:43:16.744334 waagent[1838]: 2026-03-04T00:43:16.744295Z INFO Daemon Mar 4 00:43:16.746582 waagent[1838]: 2026-03-04T00:43:16.746546Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 4 00:43:16.755675 waagent[1838]: 2026-03-04T00:43:16.755643Z INFO Daemon Daemon Downloading artifacts profile blob Mar 4 00:43:16.836043 waagent[1838]: 2026-03-04T00:43:16.835953Z INFO Daemon Downloaded certificate {'thumbprint': '5A425BF4C5BDEA9F153E870283CAD3E4916A0A25', 'hasPrivateKey': True} Mar 4 00:43:16.844212 waagent[1838]: 2026-03-04T00:43:16.844167Z INFO Daemon Fetch goal state completed Mar 4 00:43:16.854527 waagent[1838]: 2026-03-04T00:43:16.854484Z INFO Daemon Daemon Starting provisioning Mar 4 00:43:16.858709 waagent[1838]: 2026-03-04T00:43:16.858651Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 4 00:43:16.862323 waagent[1838]: 2026-03-04T00:43:16.862282Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-d3c3414975] Mar 4 00:43:16.943142 waagent[1838]: 2026-03-04T00:43:16.943056Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-d3c3414975] Mar 4 00:43:16.948360 waagent[1838]: 2026-03-04T00:43:16.948303Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 4 00:43:16.953378 waagent[1838]: 2026-03-04T00:43:16.953333Z INFO Daemon Daemon Primary interface is [eth0] Mar 4 00:43:16.983293 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:43:16.983303 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 00:43:16.983347 systemd-networkd[1361]: eth0: DHCP lease lost Mar 4 00:43:16.984647 waagent[1838]: 2026-03-04T00:43:16.984564Z INFO Daemon Daemon Create user account if not exists Mar 4 00:43:16.989054 waagent[1838]: 2026-03-04T00:43:16.989007Z INFO Daemon Daemon User core already exists, skip useradd Mar 4 00:43:16.993573 systemd-networkd[1361]: eth0: DHCPv6 lease lost Mar 4 00:43:16.994191 waagent[1838]: 2026-03-04T00:43:16.994131Z INFO Daemon Daemon Configure sudoer Mar 4 00:43:16.997910 waagent[1838]: 2026-03-04T00:43:16.997864Z INFO Daemon Daemon Configure sshd Mar 4 00:43:17.001596 waagent[1838]: 2026-03-04T00:43:17.001546Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 4 00:43:17.013294 waagent[1838]: 2026-03-04T00:43:17.013228Z INFO Daemon Daemon Deploy ssh public key. 
Mar 4 00:43:17.022174 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 4 00:43:18.100356 waagent[1838]: 2026-03-04T00:43:18.100294Z INFO Daemon Daemon Provisioning complete Mar 4 00:43:18.116824 waagent[1838]: 2026-03-04T00:43:18.116779Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 4 00:43:18.121555 waagent[1838]: 2026-03-04T00:43:18.121505Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 4 00:43:18.129292 waagent[1838]: 2026-03-04T00:43:18.129247Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 4 00:43:18.407954 waagent[1900]: 2026-03-04T00:43:18.407877Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 4 00:43:18.408283 waagent[1900]: 2026-03-04T00:43:18.408027Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 4 00:43:18.408283 waagent[1900]: 2026-03-04T00:43:18.408078Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 4 00:43:23.634253 waagent[1900]: 2026-03-04T00:43:23.507227Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 4 00:43:23.685132 waagent[1900]: 2026-03-04T00:43:23.683440Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 4 00:43:23.685132 waagent[1900]: 2026-03-04T00:43:23.683576Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 4 00:43:23.731944 waagent[1900]: 2026-03-04T00:43:23.731854Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 4 00:43:23.740232 waagent[1900]: 2026-03-04T00:43:23.740184Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 4 00:43:23.740876 waagent[1900]: 2026-03-04T00:43:23.740837Z INFO ExtHandler Mar 4 00:43:23.741042 waagent[1900]: 2026-03-04T00:43:23.741010Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7f462925-b696-4379-b1c6-f6927d13c5ee eTag: 7240242637158173164 source: Fabric] Mar 4 00:43:23.741427 waagent[1900]: 2026-03-04T00:43:23.741390Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 4 00:43:23.892453 waagent[1900]: 2026-03-04T00:43:23.892285Z INFO ExtHandler Mar 4 00:43:23.892703 waagent[1900]: 2026-03-04T00:43:23.892665Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 4 00:43:23.897160 waagent[1900]: 2026-03-04T00:43:23.897120Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 4 00:43:24.404168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 00:43:24.409271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:43:28.483213 waagent[1900]: 2026-03-04T00:43:28.483117Z INFO ExtHandler Downloaded certificate {'thumbprint': '5A425BF4C5BDEA9F153E870283CAD3E4916A0A25', 'hasPrivateKey': True} Mar 4 00:43:28.483800 waagent[1900]: 2026-03-04T00:43:28.483749Z INFO ExtHandler Fetch goal state completed Mar 4 00:43:28.500822 waagent[1900]: 2026-03-04T00:43:28.500772Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1900 Mar 4 00:43:28.500966 waagent[1900]: 2026-03-04T00:43:28.500935Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 4 00:43:28.502538 waagent[1900]: 2026-03-04T00:43:28.502498Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 4 00:43:28.502888 waagent[1900]: 2026-03-04T00:43:28.502854Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 4 00:43:30.457465 waagent[1900]: 2026-03-04T00:43:30.457421Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 4 00:43:30.457767 waagent[1900]: 
2026-03-04T00:43:30.457625Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 4 00:43:30.463686 waagent[1900]: 2026-03-04T00:43:30.463635Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 4 00:43:30.470522 systemd[1]: Reloading requested from client PID 1918 ('systemctl') (unit waagent.service)... Mar 4 00:43:30.470537 systemd[1]: Reloading... Mar 4 00:43:30.550136 zram_generator::config[1959]: No configuration found. Mar 4 00:43:30.645623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:43:30.719800 systemd[1]: Reloading finished in 248 ms. Mar 4 00:43:30.744569 waagent[1900]: 2026-03-04T00:43:30.744392Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 4 00:43:30.749959 systemd[1]: Reloading requested from client PID 2007 ('systemctl') (unit waagent.service)... Mar 4 00:43:30.749972 systemd[1]: Reloading... Mar 4 00:43:30.829142 zram_generator::config[2042]: No configuration found. Mar 4 00:43:30.930885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:43:31.005866 systemd[1]: Reloading finished in 255 ms. Mar 4 00:43:31.029385 waagent[1900]: 2026-03-04T00:43:31.029303Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 4 00:43:31.029487 waagent[1900]: 2026-03-04T00:43:31.029462Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 4 00:43:33.091089 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 4 00:43:33.092429 systemd[1]: Started sshd@0-10.200.20.21:22-10.200.16.10:51044.service - OpenSSH per-connection server daemon (10.200.16.10:51044). Mar 4 00:43:34.468979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:43:34.473808 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 00:43:34.509090 kubelet[2109]: E0304 00:43:34.509013 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 00:43:34.512714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 00:43:34.512992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 00:43:34.938854 sshd[2100]: Accepted publickey for core from 10.200.16.10 port 51044 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:43:34.940271 sshd[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:43:34.944619 systemd-logind[1698]: New session 3 of user core. Mar 4 00:43:34.953330 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 00:43:35.325196 waagent[1900]: 2026-03-04T00:43:35.324592Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 4 00:43:35.325486 waagent[1900]: 2026-03-04T00:43:35.325226Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 4 00:43:35.326821 waagent[1900]: 2026-03-04T00:43:35.326034Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 4 00:43:35.326821 waagent[1900]: 2026-03-04T00:43:35.326154Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 4 00:43:35.326821 waagent[1900]: 2026-03-04T00:43:35.326360Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 4 00:43:35.326821 waagent[1900]: 2026-03-04T00:43:35.326532Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 4 00:43:35.326821 waagent[1900]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 4 00:43:35.326821 waagent[1900]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 4 00:43:35.326821 waagent[1900]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 4 00:43:35.326821 waagent[1900]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 4 00:43:35.326821 waagent[1900]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 4 00:43:35.326821 waagent[1900]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 4 00:43:35.327060 waagent[1900]: 2026-03-04T00:43:35.326819Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 4 00:43:35.327252 waagent[1900]: 2026-03-04T00:43:35.327204Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 4 00:43:35.327321 waagent[1900]: 2026-03-04T00:43:35.327294Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 4 00:43:35.327476 waagent[1900]: 2026-03-04T00:43:35.327420Z INFO EnvHandler ExtHandler Configure routes Mar 4 00:43:35.327512 waagent[1900]: 2026-03-04T00:43:35.327482Z INFO EnvHandler ExtHandler Gateway:None Mar 4 00:43:35.327547 waagent[1900]: 2026-03-04T00:43:35.327525Z INFO EnvHandler ExtHandler Routes:None Mar 4 00:43:35.327852 waagent[1900]: 2026-03-04T00:43:35.327819Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 4 00:43:35.328134 waagent[1900]: 2026-03-04T00:43:35.328064Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 4 00:43:35.328198 waagent[1900]: 2026-03-04T00:43:35.328126Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 4 00:43:35.328609 waagent[1900]: 2026-03-04T00:43:35.328445Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 4 00:43:35.328742 waagent[1900]: 2026-03-04T00:43:35.328611Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 4 00:43:35.329127 waagent[1900]: 2026-03-04T00:43:35.329069Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 4 00:43:35.337047 waagent[1900]: 2026-03-04T00:43:35.337001Z INFO ExtHandler ExtHandler Mar 4 00:43:35.337162 waagent[1900]: 2026-03-04T00:43:35.337103Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a01b3f11-cac4-4996-8b16-3af99f9761c5 correlation 075961d9-166c-4979-8ee4-e08f25b8b70f created: 2026-03-04T00:42:17.094193Z] Mar 4 00:43:35.337541 waagent[1900]: 2026-03-04T00:43:35.337496Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 4 00:43:35.338081 waagent[1900]: 2026-03-04T00:43:35.338046Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 4 00:43:35.373150 waagent[1900]: 2026-03-04T00:43:35.371861Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AD01A600-BC2F-4F3A-B82B-2CBBE923F4EB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 4 00:43:35.376391 systemd[1]: Started sshd@1-10.200.20.21:22-10.200.16.10:51046.service - OpenSSH per-connection server daemon (10.200.16.10:51046). 
Mar 4 00:43:35.414636 waagent[1900]: 2026-03-04T00:43:35.414555Z INFO MonitorHandler ExtHandler Network interfaces: Mar 4 00:43:35.414636 waagent[1900]: Executing ['ip', '-a', '-o', 'link']: Mar 4 00:43:35.414636 waagent[1900]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 4 00:43:35.414636 waagent[1900]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c7:d9:d9 brd ff:ff:ff:ff:ff:ff Mar 4 00:43:35.414636 waagent[1900]: 3: enP64497s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c7:d9:d9 brd ff:ff:ff:ff:ff:ff\ altname enP64497p0s2 Mar 4 00:43:35.414636 waagent[1900]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 4 00:43:35.414636 waagent[1900]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 4 00:43:35.414636 waagent[1900]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 4 00:43:35.414636 waagent[1900]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 4 00:43:35.414636 waagent[1900]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 4 00:43:35.414636 waagent[1900]: 2: eth0 inet6 fe80::7eed:8dff:fec7:d9d9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 4 00:43:35.455198 waagent[1900]: 2026-03-04T00:43:35.455038Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Mar 4 00:43:35.455198 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:43:35.455198 waagent[1900]: pkts bytes target prot opt in out source destination Mar 4 00:43:35.455198 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:43:35.455198 waagent[1900]: pkts bytes target prot opt in out source destination Mar 4 00:43:35.455198 waagent[1900]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:43:35.455198 waagent[1900]: pkts bytes target prot opt in out source destination Mar 4 00:43:35.455198 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 4 00:43:35.455198 waagent[1900]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 4 00:43:35.455198 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 4 00:43:35.457899 waagent[1900]: 2026-03-04T00:43:35.457843Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 4 00:43:35.457899 waagent[1900]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:43:35.457899 waagent[1900]: pkts bytes target prot opt in out source destination Mar 4 00:43:35.457899 waagent[1900]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:43:35.457899 waagent[1900]: pkts bytes target prot opt in out source destination Mar 4 00:43:35.457899 waagent[1900]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 4 00:43:35.457899 waagent[1900]: pkts bytes target prot opt in out source destination Mar 4 00:43:35.457899 waagent[1900]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 4 00:43:35.457899 waagent[1900]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 4 00:43:35.457899 waagent[1900]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 4 00:43:35.458146 waagent[1900]: 2026-03-04T00:43:35.458095Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 4 00:43:35.857971 sshd[2130]: Accepted publickey for core from 
10.200.16.10 port 51046 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:43:35.859309 sshd[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:43:35.863707 systemd-logind[1698]: New session 4 of user core. Mar 4 00:43:35.869246 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 00:43:36.210329 sshd[2130]: pam_unix(sshd:session): session closed for user core Mar 4 00:43:36.213632 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit. Mar 4 00:43:36.214097 systemd[1]: sshd@1-10.200.20.21:22-10.200.16.10:51046.service: Deactivated successfully. Mar 4 00:43:36.215605 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 00:43:36.216511 systemd-logind[1698]: Removed session 4. Mar 4 00:43:36.297826 systemd[1]: Started sshd@2-10.200.20.21:22-10.200.16.10:51048.service - OpenSSH per-connection server daemon (10.200.16.10:51048). Mar 4 00:43:36.350459 chronyd[1685]: Selected source PHC0 Mar 4 00:43:36.784836 sshd[2153]: Accepted publickey for core from 10.200.16.10 port 51048 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:43:36.786244 sshd[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:43:36.789736 systemd-logind[1698]: New session 5 of user core. Mar 4 00:43:36.801309 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 00:43:37.133816 sshd[2153]: pam_unix(sshd:session): session closed for user core Mar 4 00:43:37.137542 systemd[1]: sshd@2-10.200.20.21:22-10.200.16.10:51048.service: Deactivated successfully. Mar 4 00:43:37.139307 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 00:43:37.140050 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit. Mar 4 00:43:37.141175 systemd-logind[1698]: Removed session 5. 
Mar 4 00:43:37.221006 systemd[1]: Started sshd@3-10.200.20.21:22-10.200.16.10:51050.service - OpenSSH per-connection server daemon (10.200.16.10:51050). Mar 4 00:43:37.709138 sshd[2160]: Accepted publickey for core from 10.200.16.10 port 51050 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:43:37.709977 sshd[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:43:37.714476 systemd-logind[1698]: New session 6 of user core. Mar 4 00:43:37.720283 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 00:43:38.061961 sshd[2160]: pam_unix(sshd:session): session closed for user core Mar 4 00:43:38.065852 systemd[1]: sshd@3-10.200.20.21:22-10.200.16.10:51050.service: Deactivated successfully. Mar 4 00:43:38.067352 systemd[1]: session-6.scope: Deactivated successfully. Mar 4 00:43:38.067926 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit. Mar 4 00:43:38.068688 systemd-logind[1698]: Removed session 6. Mar 4 00:43:38.153953 systemd[1]: Started sshd@4-10.200.20.21:22-10.200.16.10:51064.service - OpenSSH per-connection server daemon (10.200.16.10:51064). Mar 4 00:43:38.645140 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 51064 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:43:38.645993 sshd[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:43:38.650452 systemd-logind[1698]: New session 7 of user core. Mar 4 00:43:38.657296 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 4 00:43:39.060212 sudo[2170]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 00:43:39.060493 sudo[2170]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:43:39.074306 sudo[2170]: pam_unix(sudo:session): session closed for user root
Mar 4 00:43:39.152404 sshd[2167]: pam_unix(sshd:session): session closed for user core
Mar 4 00:43:39.156591 systemd[1]: sshd@4-10.200.20.21:22-10.200.16.10:51064.service: Deactivated successfully.
Mar 4 00:43:39.158173 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 00:43:39.158832 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit.
Mar 4 00:43:39.159696 systemd-logind[1698]: Removed session 7.
Mar 4 00:43:39.238697 systemd[1]: Started sshd@5-10.200.20.21:22-10.200.16.10:51066.service - OpenSSH per-connection server daemon (10.200.16.10:51066).
Mar 4 00:43:39.725283 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 51066 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:43:39.726634 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:43:39.730043 systemd-logind[1698]: New session 8 of user core.
Mar 4 00:43:39.740455 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 00:43:39.999397 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 00:43:39.999667 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:43:40.002610 sudo[2179]: pam_unix(sudo:session): session closed for user root
Mar 4 00:43:40.007145 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 00:43:40.007412 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:43:40.022395 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 00:43:40.023830 auditctl[2182]: No rules
Mar 4 00:43:40.024994 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 4 00:43:40.025207 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 4 00:43:40.027242 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 00:43:40.058284 augenrules[2200]: No rules
Mar 4 00:43:40.059798 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 00:43:40.061270 sudo[2178]: pam_unix(sudo:session): session closed for user root
Mar 4 00:43:40.138785 sshd[2175]: pam_unix(sshd:session): session closed for user core
Mar 4 00:43:40.142788 systemd[1]: sshd@5-10.200.20.21:22-10.200.16.10:51066.service: Deactivated successfully.
Mar 4 00:43:40.144543 systemd[1]: session-8.scope: Deactivated successfully.
Mar 4 00:43:40.145374 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit.
Mar 4 00:43:40.146278 systemd-logind[1698]: Removed session 8.
Mar 4 00:43:40.226625 systemd[1]: Started sshd@6-10.200.20.21:22-10.200.16.10:45922.service - OpenSSH per-connection server daemon (10.200.16.10:45922).
Mar 4 00:43:40.716977 sshd[2208]: Accepted publickey for core from 10.200.16.10 port 45922 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:43:40.717787 sshd[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:43:40.722509 systemd-logind[1698]: New session 9 of user core.
Mar 4 00:43:40.732261 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 4 00:43:40.991960 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 4 00:43:40.992237 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 00:43:42.008322 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 4 00:43:42.008450 (dockerd)[2226]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 4 00:43:42.771361 dockerd[2226]: time="2026-03-04T00:43:42.771304603Z" level=info msg="Starting up"
Mar 4 00:43:43.164386 dockerd[2226]: time="2026-03-04T00:43:43.164347076Z" level=info msg="Loading containers: start."
Mar 4 00:43:43.329145 kernel: Initializing XFRM netlink socket
Mar 4 00:43:43.497870 systemd-networkd[1361]: docker0: Link UP
Mar 4 00:43:43.523131 dockerd[2226]: time="2026-03-04T00:43:43.523077561Z" level=info msg="Loading containers: done."
Mar 4 00:43:43.533744 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1512894163-merged.mount: Deactivated successfully.
Mar 4 00:43:43.545423 dockerd[2226]: time="2026-03-04T00:43:43.545361859Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 4 00:43:43.545512 dockerd[2226]: time="2026-03-04T00:43:43.545496499Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 4 00:43:43.545643 dockerd[2226]: time="2026-03-04T00:43:43.545626539Z" level=info msg="Daemon has completed initialization"
Mar 4 00:43:43.616715 dockerd[2226]: time="2026-03-04T00:43:43.616209035Z" level=info msg="API listen on /run/docker.sock"
Mar 4 00:43:43.616847 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 4 00:43:44.018518 containerd[1721]: time="2026-03-04T00:43:44.018221515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 4 00:43:44.654168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 4 00:43:44.661310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:43:44.772657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:43:44.784573 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 00:43:44.911895 kubelet[2369]: E0304 00:43:44.911764 2369 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 00:43:44.914810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 00:43:44.914996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 00:43:45.325272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595149627.mount: Deactivated successfully.
Mar 4 00:43:46.575784 containerd[1721]: time="2026-03-04T00:43:46.575726874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:46.578443 containerd[1721]: time="2026-03-04T00:43:46.578225113Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 4 00:43:46.582165 containerd[1721]: time="2026-03-04T00:43:46.581698473Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:46.587956 containerd[1721]: time="2026-03-04T00:43:46.587919233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:46.588987 containerd[1721]: time="2026-03-04T00:43:46.588951352Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.570691197s"
Mar 4 00:43:46.589052 containerd[1721]: time="2026-03-04T00:43:46.588988752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 4 00:43:46.590229 containerd[1721]: time="2026-03-04T00:43:46.590200872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 4 00:43:47.800776 containerd[1721]: time="2026-03-04T00:43:47.800725878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:47.804347 containerd[1721]: time="2026-03-04T00:43:47.804307640Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 4 00:43:47.808581 containerd[1721]: time="2026-03-04T00:43:47.808203043Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:47.816496 containerd[1721]: time="2026-03-04T00:43:47.816445849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:47.817637 containerd[1721]: time="2026-03-04T00:43:47.817599690Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.227364618s"
Mar 4 00:43:47.817750 containerd[1721]: time="2026-03-04T00:43:47.817733210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 4 00:43:47.818489 containerd[1721]: time="2026-03-04T00:43:47.818413730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 4 00:43:48.963669 containerd[1721]: time="2026-03-04T00:43:48.962623415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:48.965963 containerd[1721]: time="2026-03-04T00:43:48.965935337Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 4 00:43:48.970422 containerd[1721]: time="2026-03-04T00:43:48.970395140Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:48.974712 containerd[1721]: time="2026-03-04T00:43:48.974675903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:48.975974 containerd[1721]: time="2026-03-04T00:43:48.975944184Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.157487734s"
Mar 4 00:43:48.975974 containerd[1721]: time="2026-03-04T00:43:48.975974344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 4 00:43:48.976389 containerd[1721]: time="2026-03-04T00:43:48.976367105Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 4 00:43:50.399416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount821266658.mount: Deactivated successfully.
Mar 4 00:43:50.737126 containerd[1721]: time="2026-03-04T00:43:50.737074383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:50.739617 containerd[1721]: time="2026-03-04T00:43:50.739582584Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 4 00:43:50.742420 containerd[1721]: time="2026-03-04T00:43:50.742374746Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:50.747392 containerd[1721]: time="2026-03-04T00:43:50.747343270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:50.748496 containerd[1721]: time="2026-03-04T00:43:50.747927350Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.771530365s"
Mar 4 00:43:50.748496 containerd[1721]: time="2026-03-04T00:43:50.747967510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 4 00:43:50.748743 containerd[1721]: time="2026-03-04T00:43:50.748723831Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 4 00:43:51.419934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903945496.mount: Deactivated successfully.
Mar 4 00:43:52.380581 containerd[1721]: time="2026-03-04T00:43:52.380536415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:52.383362 containerd[1721]: time="2026-03-04T00:43:52.383336095Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 4 00:43:52.386828 containerd[1721]: time="2026-03-04T00:43:52.386803576Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:52.394383 containerd[1721]: time="2026-03-04T00:43:52.394342498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:52.395260 containerd[1721]: time="2026-03-04T00:43:52.395149939Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.646344228s"
Mar 4 00:43:52.395260 containerd[1721]: time="2026-03-04T00:43:52.395183219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 4 00:43:52.396015 containerd[1721]: time="2026-03-04T00:43:52.395840299Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 4 00:43:52.965095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603452497.mount: Deactivated successfully.
Mar 4 00:43:52.983471 containerd[1721]: time="2026-03-04T00:43:52.983423300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:52.986068 containerd[1721]: time="2026-03-04T00:43:52.985890901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 4 00:43:52.990128 containerd[1721]: time="2026-03-04T00:43:52.989235742Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:52.994621 containerd[1721]: time="2026-03-04T00:43:52.994585543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:52.995534 containerd[1721]: time="2026-03-04T00:43:52.995497664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 599.371205ms"
Mar 4 00:43:52.995690 containerd[1721]: time="2026-03-04T00:43:52.995673304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 4 00:43:52.996264 containerd[1721]: time="2026-03-04T00:43:52.996235504Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 4 00:43:53.666328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952829734.mount: Deactivated successfully.
Mar 4 00:43:54.897130 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 4 00:43:55.154187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 4 00:43:55.161541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:43:55.265667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:43:55.269742 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 00:43:55.401785 kubelet[2579]: E0304 00:43:55.401735 2579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 00:43:55.404985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 00:43:55.405260 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 00:43:55.748217 containerd[1721]: time="2026-03-04T00:43:55.747830819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:55.750656 containerd[1721]: time="2026-03-04T00:43:55.750434380Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 4 00:43:55.755129 containerd[1721]: time="2026-03-04T00:43:55.753883021Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:55.758739 containerd[1721]: time="2026-03-04T00:43:55.758694022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 00:43:55.759897 containerd[1721]: time="2026-03-04T00:43:55.759781382Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.763512118s"
Mar 4 00:43:55.759897 containerd[1721]: time="2026-03-04T00:43:55.759814342Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 4 00:43:58.336697 update_engine[1704]: I20260304 00:43:58.336133 1704 update_attempter.cc:509] Updating boot flags...
Mar 4 00:43:58.409118 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2618)
Mar 4 00:43:58.529096 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2619)
Mar 4 00:44:02.139342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:44:02.149581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:44:02.176227 systemd[1]: Reloading requested from client PID 2679 ('systemctl') (unit session-9.scope)...
Mar 4 00:44:02.176243 systemd[1]: Reloading...
Mar 4 00:44:02.284126 zram_generator::config[2722]: No configuration found.
Mar 4 00:44:02.383434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:44:02.463304 systemd[1]: Reloading finished in 286 ms.
Mar 4 00:44:02.512893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:44:02.514371 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:44:02.518181 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 00:44:02.518371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:44:02.524292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 00:44:02.669455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 00:44:02.674084 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 00:44:02.823560 kubelet[2788]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 00:44:02.823560 kubelet[2788]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 00:44:02.823560 kubelet[2788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 00:44:02.823560 kubelet[2788]: I0304 00:44:02.822181 2788 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 00:44:03.335174 kubelet[2788]: I0304 00:44:03.335135 2788 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 00:44:03.335174 kubelet[2788]: I0304 00:44:03.335164 2788 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 00:44:03.335394 kubelet[2788]: I0304 00:44:03.335378 2788 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 00:44:03.354818 kubelet[2788]: E0304 00:44:03.354775 2788 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 00:44:03.355264 kubelet[2788]: I0304 00:44:03.355146 2788 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 00:44:03.360954 kubelet[2788]: E0304 00:44:03.360896 2788 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 00:44:03.361337 kubelet[2788]: I0304 00:44:03.361077 2788 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 00:44:03.364222 kubelet[2788]: I0304 00:44:03.364204 2788 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 00:44:03.364518 kubelet[2788]: I0304 00:44:03.364494 2788 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 00:44:03.364830 kubelet[2788]: I0304 00:44:03.364576 2788 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d3c3414975","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 00:44:03.366131 kubelet[2788]: I0304 00:44:03.364962 2788 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 00:44:03.366131 kubelet[2788]: I0304 00:44:03.364980 2788 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 00:44:03.366131 kubelet[2788]: I0304 00:44:03.365175 2788 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 00:44:03.372085 kubelet[2788]: I0304 00:44:03.371952 2788 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 00:44:03.372085 kubelet[2788]: I0304 00:44:03.371996 2788 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 00:44:03.372085 kubelet[2788]: I0304 00:44:03.372022 2788 kubelet.go:386] "Adding apiserver pod source"
Mar 4 00:44:03.373487 kubelet[2788]: I0304 00:44:03.373162 2788 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 00:44:03.375692 kubelet[2788]: E0304 00:44:03.375662 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d3c3414975&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 00:44:03.377263 kubelet[2788]: E0304 00:44:03.376016 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 00:44:03.377360 kubelet[2788]: I0304 00:44:03.377338 2788 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 00:44:03.377910 kubelet[2788]: I0304 00:44:03.377889 2788 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 00:44:03.377962 kubelet[2788]: W0304 00:44:03.377952 2788 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 00:44:03.380063 kubelet[2788]: I0304 00:44:03.380033 2788 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 00:44:03.380147 kubelet[2788]: I0304 00:44:03.380078 2788 server.go:1289] "Started kubelet"
Mar 4 00:44:03.382316 kubelet[2788]: I0304 00:44:03.382262 2788 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 00:44:03.383154 kubelet[2788]: I0304 00:44:03.383138 2788 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 00:44:03.383611 kubelet[2788]: I0304 00:44:03.383558 2788 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 00:44:03.383881 kubelet[2788]: I0304 00:44:03.383855 2788 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 00:44:03.386333 kubelet[2788]: E0304 00:44:03.383974 2788 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-d3c3414975.18997cb494d6a24a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-d3c3414975,UID:ci-4081.3.6-n-d3c3414975,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-d3c3414975,},FirstTimestamp:2026-03-04 00:44:03.380052554 +0000 UTC m=+0.702459433,LastTimestamp:2026-03-04 00:44:03.380052554 +0000 UTC m=+0.702459433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-d3c3414975,}"
Mar 4 00:44:03.386453 kubelet[2788]: I0304 00:44:03.386340 2788 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 00:44:03.388848 kubelet[2788]: E0304 00:44:03.388569 2788 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 00:44:03.388848 kubelet[2788]: I0304 00:44:03.388704 2788 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 00:44:03.391652 kubelet[2788]: E0304 00:44:03.391624 2788 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d3c3414975\" not found"
Mar 4 00:44:03.391768 kubelet[2788]: I0304 00:44:03.391758 2788 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 00:44:03.392191 kubelet[2788]: I0304 00:44:03.392042 2788 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 00:44:03.392191 kubelet[2788]: I0304 00:44:03.392101 2788 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 00:44:03.393011 kubelet[2788]: E0304 00:44:03.392867 2788 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d3c3414975?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="200ms"
Mar 4 00:44:03.393011 kubelet[2788]: E0304 00:44:03.392971 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 00:44:03.393335 kubelet[2788]: I0304 00:44:03.393317 2788 factory.go:223] Registration of the systemd container factory successfully
Mar 4 00:44:03.393519 kubelet[2788]: I0304 00:44:03.393501 2788 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 00:44:03.394837 kubelet[2788]: I0304 00:44:03.394812 2788 factory.go:223] Registration of the containerd container factory successfully
Mar 4 00:44:03.423638 kubelet[2788]: I0304 00:44:03.423508 2788 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 00:44:03.424462 kubelet[2788]: I0304 00:44:03.424446 2788 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 00:44:03.424717 kubelet[2788]: I0304 00:44:03.424533 2788 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 00:44:03.424717 kubelet[2788]: I0304 00:44:03.424557 2788 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 00:44:03.424717 kubelet[2788]: I0304 00:44:03.424568 2788 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 00:44:03.424717 kubelet[2788]: E0304 00:44:03.424605 2788 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 00:44:03.431795 kubelet[2788]: E0304 00:44:03.431744 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 00:44:03.491880 kubelet[2788]: E0304 00:44:03.491840 2788 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d3c3414975\" not found" Mar 4 00:44:03.513411 kubelet[2788]: I0304 00:44:03.513160 2788 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 00:44:03.513411 kubelet[2788]: I0304 00:44:03.513174 2788 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 00:44:03.513411 kubelet[2788]: I0304 00:44:03.513191 2788 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:44:03.518313 kubelet[2788]: I0304 00:44:03.518293 2788 policy_none.go:49] "None policy: Start" Mar 4 00:44:03.518631 kubelet[2788]: I0304 00:44:03.518410 2788 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 00:44:03.518631 kubelet[2788]: I0304 00:44:03.518427 2788 state_mem.go:35] "Initializing new in-memory state store" Mar 4 00:44:03.525256 kubelet[2788]: E0304 00:44:03.525232 2788 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 4 00:44:03.526613 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 4 00:44:03.542514 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 4 00:44:03.555558 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 4 00:44:03.556991 kubelet[2788]: E0304 00:44:03.556969 2788 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 00:44:03.557182 kubelet[2788]: I0304 00:44:03.557168 2788 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 00:44:03.557212 kubelet[2788]: I0304 00:44:03.557183 2788 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 00:44:03.557694 kubelet[2788]: I0304 00:44:03.557677 2788 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 00:44:03.559402 kubelet[2788]: E0304 00:44:03.559311 2788 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 00:44:03.559402 kubelet[2788]: E0304 00:44:03.559349 2788 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-d3c3414975\" not found" Mar 4 00:44:03.593601 kubelet[2788]: E0304 00:44:03.593493 2788 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d3c3414975?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="400ms" Mar 4 00:44:03.658944 kubelet[2788]: I0304 00:44:03.658903 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.659368 kubelet[2788]: E0304 00:44:03.659338 2788 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.739071 systemd[1]: Created slice kubepods-burstable-pod5e2d9189fe234f3dceb9985b5bdb96ea.slice - libcontainer container kubepods-burstable-pod5e2d9189fe234f3dceb9985b5bdb96ea.slice. Mar 4 00:44:03.747320 kubelet[2788]: E0304 00:44:03.747288 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.751933 systemd[1]: Created slice kubepods-burstable-podd3988665ef1dfd86e027dc2f97c84ef9.slice - libcontainer container kubepods-burstable-podd3988665ef1dfd86e027dc2f97c84ef9.slice. 
Mar 4 00:44:03.759335 kubelet[2788]: E0304 00:44:03.759312 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.762595 systemd[1]: Created slice kubepods-burstable-podae40d012d366ed0abb3bd190f837a9c4.slice - libcontainer container kubepods-burstable-podae40d012d366ed0abb3bd190f837a9c4.slice. Mar 4 00:44:03.764248 kubelet[2788]: E0304 00:44:03.764218 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.793868 kubelet[2788]: I0304 00:44:03.793841 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3988665ef1dfd86e027dc2f97c84ef9-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d3c3414975\" (UID: \"d3988665ef1dfd86e027dc2f97c84ef9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.793937 kubelet[2788]: I0304 00:44:03.793877 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3988665ef1dfd86e027dc2f97c84ef9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d3c3414975\" (UID: \"d3988665ef1dfd86e027dc2f97c84ef9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.793937 kubelet[2788]: I0304 00:44:03.793902 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3988665ef1dfd86e027dc2f97c84ef9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d3c3414975\" (UID: \"d3988665ef1dfd86e027dc2f97c84ef9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.793937 kubelet[2788]: I0304 
00:44:03.793919 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.794010 kubelet[2788]: I0304 00:44:03.793937 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e2d9189fe234f3dceb9985b5bdb96ea-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-d3c3414975\" (UID: \"5e2d9189fe234f3dceb9985b5bdb96ea\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.794010 kubelet[2788]: I0304 00:44:03.793951 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.794010 kubelet[2788]: I0304 00:44:03.793971 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.794010 kubelet[2788]: I0304 00:44:03.793988 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: 
\"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.794010 kubelet[2788]: I0304 00:44:03.794005 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.861789 kubelet[2788]: I0304 00:44:03.860976 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.862170 kubelet[2788]: E0304 00:44:03.862142 2788 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:03.994240 kubelet[2788]: E0304 00:44:03.994194 2788 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d3c3414975?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="800ms" Mar 4 00:44:04.049275 containerd[1721]: time="2026-03-04T00:44:04.049185403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d3c3414975,Uid:5e2d9189fe234f3dceb9985b5bdb96ea,Namespace:kube-system,Attempt:0,}" Mar 4 00:44:04.060818 containerd[1721]: time="2026-03-04T00:44:04.060780331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d3c3414975,Uid:d3988665ef1dfd86e027dc2f97c84ef9,Namespace:kube-system,Attempt:0,}" Mar 4 00:44:04.065805 containerd[1721]: time="2026-03-04T00:44:04.065653654Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d3c3414975,Uid:ae40d012d366ed0abb3bd190f837a9c4,Namespace:kube-system,Attempt:0,}" Mar 4 00:44:04.264049 kubelet[2788]: I0304 00:44:04.264023 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:04.264448 kubelet[2788]: E0304 00:44:04.264423 2788 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:04.376614 kubelet[2788]: E0304 00:44:04.376578 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 00:44:04.465232 kubelet[2788]: E0304 00:44:04.465187 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 00:44:04.603968 kubelet[2788]: E0304 00:44:04.603670 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 00:44:04.651877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142562434.mount: Deactivated successfully. 
Mar 4 00:44:04.682246 containerd[1721]: time="2026-03-04T00:44:04.682193428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:44:04.684935 containerd[1721]: time="2026-03-04T00:44:04.684895229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 4 00:44:04.687807 containerd[1721]: time="2026-03-04T00:44:04.687771471Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:44:04.691337 containerd[1721]: time="2026-03-04T00:44:04.690584673Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:44:04.693674 containerd[1721]: time="2026-03-04T00:44:04.693640155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 00:44:04.697125 containerd[1721]: time="2026-03-04T00:44:04.696545397Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:44:04.698698 containerd[1721]: time="2026-03-04T00:44:04.698636679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 00:44:04.704212 containerd[1721]: time="2026-03-04T00:44:04.704172522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 00:44:04.706135 
containerd[1721]: time="2026-03-04T00:44:04.704874643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 639.160669ms" Mar 4 00:44:04.706135 containerd[1721]: time="2026-03-04T00:44:04.705928203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 656.6648ms" Mar 4 00:44:04.706953 containerd[1721]: time="2026-03-04T00:44:04.706923484Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 646.070633ms" Mar 4 00:44:04.794857 kubelet[2788]: E0304 00:44:04.794815 2788 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d3c3414975?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="1.6s" Mar 4 00:44:04.904332 kubelet[2788]: E0304 00:44:04.903138 2788 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d3c3414975&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 00:44:05.067172 kubelet[2788]: 
I0304 00:44:05.066958 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:05.067452 kubelet[2788]: E0304 00:44:05.067278 2788 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:05.374540 containerd[1721]: time="2026-03-04T00:44:05.374087732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:05.374540 containerd[1721]: time="2026-03-04T00:44:05.374266892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:05.374540 containerd[1721]: time="2026-03-04T00:44:05.374289332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:05.375321 containerd[1721]: time="2026-03-04T00:44:05.374448852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:05.378250 containerd[1721]: time="2026-03-04T00:44:05.378075534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:05.378250 containerd[1721]: time="2026-03-04T00:44:05.378158015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:05.378250 containerd[1721]: time="2026-03-04T00:44:05.378174935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:05.378800 containerd[1721]: time="2026-03-04T00:44:05.378689775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:05.379467 containerd[1721]: time="2026-03-04T00:44:05.378785815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:05.379467 containerd[1721]: time="2026-03-04T00:44:05.378801455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:05.379467 containerd[1721]: time="2026-03-04T00:44:05.379383295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:05.379764 containerd[1721]: time="2026-03-04T00:44:05.379548775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:05.405291 systemd[1]: Started cri-containerd-0f5138f93e1d887e1be3a20e6de21d7cb95dd8a6721403aacc52283f5c66e92c.scope - libcontainer container 0f5138f93e1d887e1be3a20e6de21d7cb95dd8a6721403aacc52283f5c66e92c. Mar 4 00:44:05.410885 systemd[1]: Started cri-containerd-7781538dde9b09b3e8f1202510355bdb9b2e8b69a0fd2958b894f69f1698ced2.scope - libcontainer container 7781538dde9b09b3e8f1202510355bdb9b2e8b69a0fd2958b894f69f1698ced2. Mar 4 00:44:05.411872 systemd[1]: Started cri-containerd-a1e091a2b878d6a5d9250c1a65187db39f345d1ed10171bc0ed49e27679856ba.scope - libcontainer container a1e091a2b878d6a5d9250c1a65187db39f345d1ed10171bc0ed49e27679856ba. 
Mar 4 00:44:05.461579 containerd[1721]: time="2026-03-04T00:44:05.461542630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d3c3414975,Uid:ae40d012d366ed0abb3bd190f837a9c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5138f93e1d887e1be3a20e6de21d7cb95dd8a6721403aacc52283f5c66e92c\"" Mar 4 00:44:05.465070 containerd[1721]: time="2026-03-04T00:44:05.463923912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d3c3414975,Uid:5e2d9189fe234f3dceb9985b5bdb96ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7781538dde9b09b3e8f1202510355bdb9b2e8b69a0fd2958b894f69f1698ced2\"" Mar 4 00:44:05.469310 containerd[1721]: time="2026-03-04T00:44:05.469279956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d3c3414975,Uid:d3988665ef1dfd86e027dc2f97c84ef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e091a2b878d6a5d9250c1a65187db39f345d1ed10171bc0ed49e27679856ba\"" Mar 4 00:44:05.474031 containerd[1721]: time="2026-03-04T00:44:05.473995119Z" level=info msg="CreateContainer within sandbox \"0f5138f93e1d887e1be3a20e6de21d7cb95dd8a6721403aacc52283f5c66e92c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 00:44:05.479944 containerd[1721]: time="2026-03-04T00:44:05.479821483Z" level=info msg="CreateContainer within sandbox \"7781538dde9b09b3e8f1202510355bdb9b2e8b69a0fd2958b894f69f1698ced2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 00:44:05.485284 containerd[1721]: time="2026-03-04T00:44:05.485242166Z" level=info msg="CreateContainer within sandbox \"a1e091a2b878d6a5d9250c1a65187db39f345d1ed10171bc0ed49e27679856ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 00:44:05.539017 containerd[1721]: time="2026-03-04T00:44:05.538969482Z" level=info msg="CreateContainer within sandbox 
\"0f5138f93e1d887e1be3a20e6de21d7cb95dd8a6721403aacc52283f5c66e92c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0669ef610f6d816f48e60760d99bcbf0fb8ce446338f77fdc97814e294d9a8b5\"" Mar 4 00:44:05.540764 containerd[1721]: time="2026-03-04T00:44:05.539631483Z" level=info msg="StartContainer for \"0669ef610f6d816f48e60760d99bcbf0fb8ce446338f77fdc97814e294d9a8b5\"" Mar 4 00:44:05.544818 kubelet[2788]: E0304 00:44:05.544783 2788 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 00:44:05.549461 containerd[1721]: time="2026-03-04T00:44:05.549322209Z" level=info msg="CreateContainer within sandbox \"a1e091a2b878d6a5d9250c1a65187db39f345d1ed10171bc0ed49e27679856ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36d29a832e1439295a3221d6b697703ae1eac5f04b656baa66f84bd2dae8d1a0\"" Mar 4 00:44:05.549866 containerd[1721]: time="2026-03-04T00:44:05.549844410Z" level=info msg="StartContainer for \"36d29a832e1439295a3221d6b697703ae1eac5f04b656baa66f84bd2dae8d1a0\"" Mar 4 00:44:05.557211 containerd[1721]: time="2026-03-04T00:44:05.557001735Z" level=info msg="CreateContainer within sandbox \"7781538dde9b09b3e8f1202510355bdb9b2e8b69a0fd2958b894f69f1698ced2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f65c4929441f73ab46f05547dc4ff7d1a108437823d19b67c18932a0cd92be11\"" Mar 4 00:44:05.558577 containerd[1721]: time="2026-03-04T00:44:05.557719735Z" level=info msg="StartContainer for \"f65c4929441f73ab46f05547dc4ff7d1a108437823d19b67c18932a0cd92be11\"" Mar 4 00:44:05.566175 systemd[1]: Started 
cri-containerd-0669ef610f6d816f48e60760d99bcbf0fb8ce446338f77fdc97814e294d9a8b5.scope - libcontainer container 0669ef610f6d816f48e60760d99bcbf0fb8ce446338f77fdc97814e294d9a8b5. Mar 4 00:44:05.594395 systemd[1]: Started cri-containerd-36d29a832e1439295a3221d6b697703ae1eac5f04b656baa66f84bd2dae8d1a0.scope - libcontainer container 36d29a832e1439295a3221d6b697703ae1eac5f04b656baa66f84bd2dae8d1a0. Mar 4 00:44:05.605311 systemd[1]: Started cri-containerd-f65c4929441f73ab46f05547dc4ff7d1a108437823d19b67c18932a0cd92be11.scope - libcontainer container f65c4929441f73ab46f05547dc4ff7d1a108437823d19b67c18932a0cd92be11. Mar 4 00:44:05.634517 containerd[1721]: time="2026-03-04T00:44:05.634246906Z" level=info msg="StartContainer for \"0669ef610f6d816f48e60760d99bcbf0fb8ce446338f77fdc97814e294d9a8b5\" returns successfully" Mar 4 00:44:05.648900 containerd[1721]: time="2026-03-04T00:44:05.648831316Z" level=info msg="StartContainer for \"36d29a832e1439295a3221d6b697703ae1eac5f04b656baa66f84bd2dae8d1a0\" returns successfully" Mar 4 00:44:05.694910 containerd[1721]: time="2026-03-04T00:44:05.694858627Z" level=info msg="StartContainer for \"f65c4929441f73ab46f05547dc4ff7d1a108437823d19b67c18932a0cd92be11\" returns successfully" Mar 4 00:44:06.440194 kubelet[2788]: E0304 00:44:06.440057 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:06.447783 kubelet[2788]: E0304 00:44:06.447757 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:06.449231 kubelet[2788]: E0304 00:44:06.449212 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:06.670704 kubelet[2788]: 
I0304 00:44:06.670420 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:07.452707 kubelet[2788]: E0304 00:44:07.452275 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:07.452707 kubelet[2788]: E0304 00:44:07.452592 2788 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.112332 kubelet[2788]: E0304 00:44:08.112295 2788 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-d3c3414975\" not found" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.240721 kubelet[2788]: I0304 00:44:08.239611 2788 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.240721 kubelet[2788]: E0304 00:44:08.239653 2788 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-d3c3414975\": node \"ci-4081.3.6-n-d3c3414975\" not found" Mar 4 00:44:08.293486 kubelet[2788]: I0304 00:44:08.293449 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.303065 kubelet[2788]: E0304 00:44:08.303009 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d3c3414975\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.303065 kubelet[2788]: I0304 00:44:08.303059 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.308492 kubelet[2788]: E0304 00:44:08.308283 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081.3.6-n-d3c3414975\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.308492 kubelet[2788]: I0304 00:44:08.308314 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.310155 kubelet[2788]: E0304 00:44:08.310131 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:08.379193 kubelet[2788]: I0304 00:44:08.378988 2788 apiserver.go:52] "Watching apiserver" Mar 4 00:44:08.393116 kubelet[2788]: I0304 00:44:08.393040 2788 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 00:44:10.237679 systemd[1]: Reloading requested from client PID 3070 ('systemctl') (unit session-9.scope)... Mar 4 00:44:10.237958 systemd[1]: Reloading... Mar 4 00:44:10.330138 zram_generator::config[3113]: No configuration found. Mar 4 00:44:10.417085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:44:10.426162 kubelet[2788]: I0304 00:44:10.424774 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:10.434234 kubelet[2788]: I0304 00:44:10.433386 2788 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:44:10.512213 systemd[1]: Reloading finished in 273 ms. Mar 4 00:44:10.545050 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 4 00:44:10.556280 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 00:44:10.556642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:44:10.562557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:44:10.930066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 00:44:10.936246 (kubelet)[3174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 00:44:10.979899 kubelet[3174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 00:44:10.979899 kubelet[3174]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 00:44:10.979899 kubelet[3174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 4 00:44:10.980283 kubelet[3174]: I0304 00:44:10.979932 3174 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 00:44:10.995048 kubelet[3174]: I0304 00:44:10.995015 3174 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 4 00:44:10.996126 kubelet[3174]: I0304 00:44:10.995197 3174 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 00:44:10.996126 kubelet[3174]: I0304 00:44:10.995595 3174 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 00:44:10.997221 kubelet[3174]: I0304 00:44:10.997204 3174 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 00:44:10.999623 kubelet[3174]: I0304 00:44:10.999596 3174 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 00:44:11.004087 kubelet[3174]: E0304 00:44:11.004061 3174 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 00:44:11.004248 kubelet[3174]: I0304 00:44:11.004234 3174 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 4 00:44:11.007053 kubelet[3174]: I0304 00:44:11.007036 3174 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 4 00:44:11.007367 kubelet[3174]: I0304 00:44:11.007342 3174 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 00:44:11.008035 kubelet[3174]: I0304 00:44:11.007840 3174 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d3c3414975","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 00:44:11.008208 kubelet[3174]: I0304 00:44:11.008193 3174 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 
00:44:11.008258 kubelet[3174]: I0304 00:44:11.008251 3174 container_manager_linux.go:303] "Creating device plugin manager" Mar 4 00:44:11.008361 kubelet[3174]: I0304 00:44:11.008352 3174 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:44:11.008570 kubelet[3174]: I0304 00:44:11.008559 3174 kubelet.go:480] "Attempting to sync node with API server" Mar 4 00:44:11.011161 kubelet[3174]: I0304 00:44:11.011144 3174 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 00:44:11.011272 kubelet[3174]: I0304 00:44:11.011263 3174 kubelet.go:386] "Adding apiserver pod source" Mar 4 00:44:11.011332 kubelet[3174]: I0304 00:44:11.011324 3174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 00:44:11.014409 kubelet[3174]: I0304 00:44:11.014389 3174 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 00:44:11.015081 kubelet[3174]: I0304 00:44:11.015062 3174 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 00:44:11.023118 kubelet[3174]: I0304 00:44:11.019804 3174 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 4 00:44:11.023118 kubelet[3174]: I0304 00:44:11.019844 3174 server.go:1289] "Started kubelet" Mar 4 00:44:11.026733 kubelet[3174]: I0304 00:44:11.026702 3174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 00:44:11.039367 kubelet[3174]: I0304 00:44:11.039314 3174 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 00:44:11.041964 kubelet[3174]: I0304 00:44:11.041945 3174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 00:44:11.043193 kubelet[3174]: I0304 00:44:11.043178 3174 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 4 00:44:11.049543 
kubelet[3174]: I0304 00:44:11.049520 3174 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 4 00:44:11.049776 kubelet[3174]: I0304 00:44:11.049765 3174 reconciler.go:26] "Reconciler: start to sync state" Mar 4 00:44:11.052252 kubelet[3174]: I0304 00:44:11.052223 3174 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 4 00:44:11.053315 kubelet[3174]: I0304 00:44:11.053296 3174 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 4 00:44:11.053414 kubelet[3174]: I0304 00:44:11.053405 3174 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 4 00:44:11.053482 kubelet[3174]: I0304 00:44:11.053474 3174 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 00:44:11.053529 kubelet[3174]: I0304 00:44:11.053522 3174 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 00:44:11.053615 kubelet[3174]: E0304 00:44:11.053600 3174 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 00:44:11.055947 kubelet[3174]: I0304 00:44:11.055762 3174 server.go:317] "Adding debug handlers to kubelet server" Mar 4 00:44:11.059404 kubelet[3174]: I0304 00:44:11.041326 3174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 00:44:11.059595 kubelet[3174]: I0304 00:44:11.059570 3174 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 00:44:11.063010 kubelet[3174]: I0304 00:44:11.062741 3174 factory.go:223] Registration of the systemd container factory successfully Mar 4 00:44:11.063010 kubelet[3174]: I0304 00:44:11.062841 3174 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Mar 4 00:44:11.067148 kubelet[3174]: I0304 00:44:11.066597 3174 factory.go:223] Registration of the containerd container factory successfully Mar 4 00:44:11.068004 kubelet[3174]: E0304 00:44:11.067985 3174 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 00:44:11.108229 kubelet[3174]: I0304 00:44:11.108207 3174 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 00:44:11.108385 kubelet[3174]: I0304 00:44:11.108373 3174 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 00:44:11.108442 kubelet[3174]: I0304 00:44:11.108435 3174 state_mem.go:36] "Initialized new in-memory state store" Mar 4 00:44:11.108622 kubelet[3174]: I0304 00:44:11.108610 3174 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 00:44:11.108685 kubelet[3174]: I0304 00:44:11.108665 3174 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 00:44:11.108735 kubelet[3174]: I0304 00:44:11.108728 3174 policy_none.go:49] "None policy: Start" Mar 4 00:44:11.108788 kubelet[3174]: I0304 00:44:11.108781 3174 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 00:44:11.108838 kubelet[3174]: I0304 00:44:11.108831 3174 state_mem.go:35] "Initializing new in-memory state store" Mar 4 00:44:11.108983 kubelet[3174]: I0304 00:44:11.108973 3174 state_mem.go:75] "Updated machine memory state" Mar 4 00:44:11.113257 kubelet[3174]: E0304 00:44:11.112487 3174 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 00:44:11.113257 kubelet[3174]: I0304 00:44:11.112639 3174 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 00:44:11.113257 kubelet[3174]: I0304 00:44:11.112649 3174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 00:44:11.113257 
kubelet[3174]: I0304 00:44:11.113168 3174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 00:44:11.116349 kubelet[3174]: E0304 00:44:11.116329 3174 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 00:44:11.154928 kubelet[3174]: I0304 00:44:11.154503 3174 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.154928 kubelet[3174]: I0304 00:44:11.154584 3174 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.154928 kubelet[3174]: I0304 00:44:11.154827 3174 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.165755 kubelet[3174]: I0304 00:44:11.165721 3174 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:44:11.165891 kubelet[3174]: E0304 00:44:11.165871 3174 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d3c3414975\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.166178 kubelet[3174]: I0304 00:44:11.165997 3174 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:44:11.166178 kubelet[3174]: I0304 00:44:11.166051 3174 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:44:11.224490 kubelet[3174]: I0304 00:44:11.224391 3174 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.235900 
kubelet[3174]: I0304 00:44:11.235866 3174 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.236026 kubelet[3174]: I0304 00:44:11.235948 3174 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251122 kubelet[3174]: I0304 00:44:11.251085 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e2d9189fe234f3dceb9985b5bdb96ea-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-d3c3414975\" (UID: \"5e2d9189fe234f3dceb9985b5bdb96ea\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251239 kubelet[3174]: I0304 00:44:11.251128 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3988665ef1dfd86e027dc2f97c84ef9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d3c3414975\" (UID: \"d3988665ef1dfd86e027dc2f97c84ef9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251239 kubelet[3174]: I0304 00:44:11.251149 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3988665ef1dfd86e027dc2f97c84ef9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d3c3414975\" (UID: \"d3988665ef1dfd86e027dc2f97c84ef9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251239 kubelet[3174]: I0304 00:44:11.251167 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 
00:44:11.251239 kubelet[3174]: I0304 00:44:11.251183 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251239 kubelet[3174]: I0304 00:44:11.251203 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3988665ef1dfd86e027dc2f97c84ef9-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d3c3414975\" (UID: \"d3988665ef1dfd86e027dc2f97c84ef9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251353 kubelet[3174]: I0304 00:44:11.251218 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251353 kubelet[3174]: I0304 00:44:11.251234 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:11.251353 kubelet[3174]: I0304 00:44:11.251251 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae40d012d366ed0abb3bd190f837a9c4-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.6-n-d3c3414975\" (UID: \"ae40d012d366ed0abb3bd190f837a9c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:12.012819 kubelet[3174]: I0304 00:44:12.012561 3174 apiserver.go:52] "Watching apiserver" Mar 4 00:44:12.050143 kubelet[3174]: I0304 00:44:12.049997 3174 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 00:44:12.089681 kubelet[3174]: I0304 00:44:12.089638 3174 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:12.105136 kubelet[3174]: I0304 00:44:12.104114 3174 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 00:44:12.105136 kubelet[3174]: E0304 00:44:12.104171 3174 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d3c3414975\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" Mar 4 00:44:12.139481 kubelet[3174]: I0304 00:44:12.139326 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d3c3414975" podStartSLOduration=1.139282339 podStartE2EDuration="1.139282339s" podCreationTimestamp="2026-03-04 00:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:44:12.138464298 +0000 UTC m=+1.197636917" watchObservedRunningTime="2026-03-04 00:44:12.139282339 +0000 UTC m=+1.198454958" Mar 4 00:44:12.183754 kubelet[3174]: I0304 00:44:12.183005 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d3c3414975" podStartSLOduration=2.182988902 podStartE2EDuration="2.182988902s" podCreationTimestamp="2026-03-04 00:44:10 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:44:12.163992923 +0000 UTC m=+1.223165542" watchObservedRunningTime="2026-03-04 00:44:12.182988902 +0000 UTC m=+1.242161521" Mar 4 00:44:12.198781 kubelet[3174]: I0304 00:44:12.198181 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d3c3414975" podStartSLOduration=1.198166038 podStartE2EDuration="1.198166038s" podCreationTimestamp="2026-03-04 00:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:44:12.184247144 +0000 UTC m=+1.243419763" watchObservedRunningTime="2026-03-04 00:44:12.198166038 +0000 UTC m=+1.257338657" Mar 4 00:44:17.314641 kubelet[3174]: I0304 00:44:17.314603 3174 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 00:44:17.315676 kubelet[3174]: I0304 00:44:17.315125 3174 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 00:44:17.315709 containerd[1721]: time="2026-03-04T00:44:17.314949863Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 00:44:18.417486 systemd[1]: Created slice kubepods-besteffort-pod580a3ea0_ffad_45a5_9bc4_c227eb798ea1.slice - libcontainer container kubepods-besteffort-pod580a3ea0_ffad_45a5_9bc4_c227eb798ea1.slice. 
Mar 4 00:44:18.489342 kubelet[3174]: I0304 00:44:18.489304 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6dr\" (UniqueName: \"kubernetes.io/projected/580a3ea0-ffad-45a5-9bc4-c227eb798ea1-kube-api-access-5c6dr\") pod \"kube-proxy-27bwr\" (UID: \"580a3ea0-ffad-45a5-9bc4-c227eb798ea1\") " pod="kube-system/kube-proxy-27bwr" Mar 4 00:44:18.489342 kubelet[3174]: I0304 00:44:18.489344 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/580a3ea0-ffad-45a5-9bc4-c227eb798ea1-kube-proxy\") pod \"kube-proxy-27bwr\" (UID: \"580a3ea0-ffad-45a5-9bc4-c227eb798ea1\") " pod="kube-system/kube-proxy-27bwr" Mar 4 00:44:18.489720 kubelet[3174]: I0304 00:44:18.489364 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/580a3ea0-ffad-45a5-9bc4-c227eb798ea1-xtables-lock\") pod \"kube-proxy-27bwr\" (UID: \"580a3ea0-ffad-45a5-9bc4-c227eb798ea1\") " pod="kube-system/kube-proxy-27bwr" Mar 4 00:44:18.489720 kubelet[3174]: I0304 00:44:18.489381 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/580a3ea0-ffad-45a5-9bc4-c227eb798ea1-lib-modules\") pod \"kube-proxy-27bwr\" (UID: \"580a3ea0-ffad-45a5-9bc4-c227eb798ea1\") " pod="kube-system/kube-proxy-27bwr" Mar 4 00:44:18.543853 systemd[1]: Created slice kubepods-besteffort-pod2dd3bfb4_4602_4a21_8c33_877ea483fa03.slice - libcontainer container kubepods-besteffort-pod2dd3bfb4_4602_4a21_8c33_877ea483fa03.slice. 
Mar 4 00:44:18.590359 kubelet[3174]: I0304 00:44:18.590223 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6kd\" (UniqueName: \"kubernetes.io/projected/2dd3bfb4-4602-4a21-8c33-877ea483fa03-kube-api-access-zt6kd\") pod \"tigera-operator-6bf85f8dd-62l6n\" (UID: \"2dd3bfb4-4602-4a21-8c33-877ea483fa03\") " pod="tigera-operator/tigera-operator-6bf85f8dd-62l6n" Mar 4 00:44:18.590359 kubelet[3174]: I0304 00:44:18.590306 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2dd3bfb4-4602-4a21-8c33-877ea483fa03-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-62l6n\" (UID: \"2dd3bfb4-4602-4a21-8c33-877ea483fa03\") " pod="tigera-operator/tigera-operator-6bf85f8dd-62l6n" Mar 4 00:44:18.725623 containerd[1721]: time="2026-03-04T00:44:18.725505026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27bwr,Uid:580a3ea0-ffad-45a5-9bc4-c227eb798ea1,Namespace:kube-system,Attempt:0,}" Mar 4 00:44:18.763960 containerd[1721]: time="2026-03-04T00:44:18.763733805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:18.763960 containerd[1721]: time="2026-03-04T00:44:18.763784205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:18.763960 containerd[1721]: time="2026-03-04T00:44:18.763813805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:18.763960 containerd[1721]: time="2026-03-04T00:44:18.763886205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:18.782263 systemd[1]: Started cri-containerd-11b2afb196f62d9c411b39a65c72248d223303cb2ef332c853e2bfd0e2fad3a5.scope - libcontainer container 11b2afb196f62d9c411b39a65c72248d223303cb2ef332c853e2bfd0e2fad3a5. Mar 4 00:44:18.810817 containerd[1721]: time="2026-03-04T00:44:18.810763229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27bwr,Uid:580a3ea0-ffad-45a5-9bc4-c227eb798ea1,Namespace:kube-system,Attempt:0,} returns sandbox id \"11b2afb196f62d9c411b39a65c72248d223303cb2ef332c853e2bfd0e2fad3a5\"" Mar 4 00:44:18.820824 containerd[1721]: time="2026-03-04T00:44:18.820783874Z" level=info msg="CreateContainer within sandbox \"11b2afb196f62d9c411b39a65c72248d223303cb2ef332c853e2bfd0e2fad3a5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 00:44:18.848687 containerd[1721]: time="2026-03-04T00:44:18.848648809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-62l6n,Uid:2dd3bfb4-4602-4a21-8c33-877ea483fa03,Namespace:tigera-operator,Attempt:0,}" Mar 4 00:44:18.860302 containerd[1721]: time="2026-03-04T00:44:18.860256655Z" level=info msg="CreateContainer within sandbox \"11b2afb196f62d9c411b39a65c72248d223303cb2ef332c853e2bfd0e2fad3a5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9614ec5eb92ea0c3695f13d2d4197e714482390e679cb679cc22866cdd18fdb\"" Mar 4 00:44:18.861902 containerd[1721]: time="2026-03-04T00:44:18.861868936Z" level=info msg="StartContainer for \"e9614ec5eb92ea0c3695f13d2d4197e714482390e679cb679cc22866cdd18fdb\"" Mar 4 00:44:18.892393 systemd[1]: Started cri-containerd-e9614ec5eb92ea0c3695f13d2d4197e714482390e679cb679cc22866cdd18fdb.scope - libcontainer container e9614ec5eb92ea0c3695f13d2d4197e714482390e679cb679cc22866cdd18fdb. Mar 4 00:44:18.899138 containerd[1721]: time="2026-03-04T00:44:18.898718234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:18.899138 containerd[1721]: time="2026-03-04T00:44:18.898791554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:18.899138 containerd[1721]: time="2026-03-04T00:44:18.898805554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:18.899138 containerd[1721]: time="2026-03-04T00:44:18.898882875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:18.921266 systemd[1]: Started cri-containerd-59f093999732c5289669b85c96596b1a71854ac5ea8be69acf1a74e6603cfa05.scope - libcontainer container 59f093999732c5289669b85c96596b1a71854ac5ea8be69acf1a74e6603cfa05. Mar 4 00:44:18.931177 containerd[1721]: time="2026-03-04T00:44:18.930587131Z" level=info msg="StartContainer for \"e9614ec5eb92ea0c3695f13d2d4197e714482390e679cb679cc22866cdd18fdb\" returns successfully" Mar 4 00:44:18.956622 containerd[1721]: time="2026-03-04T00:44:18.956429784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-62l6n,Uid:2dd3bfb4-4602-4a21-8c33-877ea483fa03,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"59f093999732c5289669b85c96596b1a71854ac5ea8be69acf1a74e6603cfa05\"" Mar 4 00:44:18.960329 containerd[1721]: time="2026-03-04T00:44:18.959041705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 00:44:19.138006 kubelet[3174]: I0304 00:44:19.137859 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27bwr" podStartSLOduration=1.137841757 podStartE2EDuration="1.137841757s" podCreationTimestamp="2026-03-04 00:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-04 00:44:19.115265385 +0000 UTC m=+8.174437964" watchObservedRunningTime="2026-03-04 00:44:19.137841757 +0000 UTC m=+8.197014376" Mar 4 00:44:20.944824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146589840.mount: Deactivated successfully. Mar 4 00:44:21.614802 containerd[1721]: time="2026-03-04T00:44:21.614753306Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:21.617657 containerd[1721]: time="2026-03-04T00:44:21.617627508Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 4 00:44:21.620128 containerd[1721]: time="2026-03-04T00:44:21.620065509Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:21.625123 containerd[1721]: time="2026-03-04T00:44:21.624892671Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:21.625653 containerd[1721]: time="2026-03-04T00:44:21.625623952Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.665071286s" Mar 4 00:44:21.625706 containerd[1721]: time="2026-03-04T00:44:21.625657632Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 4 00:44:21.635588 containerd[1721]: time="2026-03-04T00:44:21.635482477Z" level=info msg="CreateContainer within 
sandbox \"59f093999732c5289669b85c96596b1a71854ac5ea8be69acf1a74e6603cfa05\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 00:44:21.654869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989267294.mount: Deactivated successfully. Mar 4 00:44:21.665338 containerd[1721]: time="2026-03-04T00:44:21.665244892Z" level=info msg="CreateContainer within sandbox \"59f093999732c5289669b85c96596b1a71854ac5ea8be69acf1a74e6603cfa05\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5c5572fbe038abe56d9c35c6fb2e88d45b2ba53c613ca847fa97cd7be47331b3\"" Mar 4 00:44:21.667130 containerd[1721]: time="2026-03-04T00:44:21.665726172Z" level=info msg="StartContainer for \"5c5572fbe038abe56d9c35c6fb2e88d45b2ba53c613ca847fa97cd7be47331b3\"" Mar 4 00:44:21.694311 systemd[1]: Started cri-containerd-5c5572fbe038abe56d9c35c6fb2e88d45b2ba53c613ca847fa97cd7be47331b3.scope - libcontainer container 5c5572fbe038abe56d9c35c6fb2e88d45b2ba53c613ca847fa97cd7be47331b3. 
Mar 4 00:44:21.723386 containerd[1721]: time="2026-03-04T00:44:21.723334962Z" level=info msg="StartContainer for \"5c5572fbe038abe56d9c35c6fb2e88d45b2ba53c613ca847fa97cd7be47331b3\" returns successfully" Mar 4 00:44:22.123036 kubelet[3174]: I0304 00:44:22.122978 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-62l6n" podStartSLOduration=1.454068 podStartE2EDuration="4.122961887s" podCreationTimestamp="2026-03-04 00:44:18 +0000 UTC" firstStartedPulling="2026-03-04 00:44:18.957998785 +0000 UTC m=+8.017171404" lastFinishedPulling="2026-03-04 00:44:21.626892672 +0000 UTC m=+10.686065291" observedRunningTime="2026-03-04 00:44:22.122821246 +0000 UTC m=+11.181993865" watchObservedRunningTime="2026-03-04 00:44:22.122961887 +0000 UTC m=+11.182134506" Mar 4 00:44:27.605515 sudo[2211]: pam_unix(sudo:session): session closed for user root Mar 4 00:44:27.684015 sshd[2208]: pam_unix(sshd:session): session closed for user core Mar 4 00:44:27.689507 systemd[1]: sshd@6-10.200.20.21:22-10.200.16.10:45922.service: Deactivated successfully. Mar 4 00:44:27.694186 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 00:44:27.694395 systemd[1]: session-9.scope: Consumed 7.557s CPU time, 155.3M memory peak, 0B memory swap peak. Mar 4 00:44:27.697611 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Mar 4 00:44:27.699732 systemd-logind[1698]: Removed session 9. Mar 4 00:44:32.864394 systemd[1]: Created slice kubepods-besteffort-podf9384b9d_2234_4deb_b7df_fe914ac8391d.slice - libcontainer container kubepods-besteffort-podf9384b9d_2234_4deb_b7df_fe914ac8391d.slice. 
Mar 4 00:44:32.874127 kubelet[3174]: I0304 00:44:32.872991 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9384b9d-2234-4deb-b7df-fe914ac8391d-typha-certs\") pod \"calico-typha-59b7d885bd-m65q7\" (UID: \"f9384b9d-2234-4deb-b7df-fe914ac8391d\") " pod="calico-system/calico-typha-59b7d885bd-m65q7"
Mar 4 00:44:32.874127 kubelet[3174]: I0304 00:44:32.873031 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9384b9d-2234-4deb-b7df-fe914ac8391d-tigera-ca-bundle\") pod \"calico-typha-59b7d885bd-m65q7\" (UID: \"f9384b9d-2234-4deb-b7df-fe914ac8391d\") " pod="calico-system/calico-typha-59b7d885bd-m65q7"
Mar 4 00:44:32.874127 kubelet[3174]: I0304 00:44:32.873052 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64lsc\" (UniqueName: \"kubernetes.io/projected/f9384b9d-2234-4deb-b7df-fe914ac8391d-kube-api-access-64lsc\") pod \"calico-typha-59b7d885bd-m65q7\" (UID: \"f9384b9d-2234-4deb-b7df-fe914ac8391d\") " pod="calico-system/calico-typha-59b7d885bd-m65q7"
Mar 4 00:44:32.973432 kubelet[3174]: I0304 00:44:32.973399 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-tigera-ca-bundle\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973432 kubelet[3174]: I0304 00:44:32.973432 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-xtables-lock\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973587 kubelet[3174]: I0304 00:44:32.973449 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-node-certs\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973587 kubelet[3174]: I0304 00:44:32.973479 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-bpffs\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973587 kubelet[3174]: I0304 00:44:32.973494 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-cni-net-dir\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973587 kubelet[3174]: I0304 00:44:32.973507 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-flexvol-driver-host\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973587 kubelet[3174]: I0304 00:44:32.973523 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-policysync\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973699 kubelet[3174]: I0304 00:44:32.973535 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-sys-fs\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973699 kubelet[3174]: I0304 00:44:32.973549 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-var-run-calico\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.973699 kubelet[3174]: I0304 00:44:32.973583 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878j4\" (UniqueName: \"kubernetes.io/projected/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-kube-api-access-878j4\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.975924 kubelet[3174]: I0304 00:44:32.973598 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-cni-bin-dir\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.975924 kubelet[3174]: I0304 00:44:32.973782 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-cni-log-dir\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.975924 kubelet[3174]: I0304 00:44:32.973800 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-var-lib-calico\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.975924 kubelet[3174]: I0304 00:44:32.973817 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-lib-modules\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.975924 kubelet[3174]: I0304 00:44:32.973832 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5b9debdd-b7ab-41bf-bb7d-7d5f05238df4-nodeproc\") pod \"calico-node-9bcdf\" (UID: \"5b9debdd-b7ab-41bf-bb7d-7d5f05238df4\") " pod="calico-system/calico-node-9bcdf"
Mar 4 00:44:32.975217 systemd[1]: Created slice kubepods-besteffort-pod5b9debdd_b7ab_41bf_bb7d_7d5f05238df4.slice - libcontainer container kubepods-besteffort-pod5b9debdd_b7ab_41bf_bb7d_7d5f05238df4.slice.
Mar 4 00:44:33.079009 kubelet[3174]: E0304 00:44:33.078588 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.079009 kubelet[3174]: W0304 00:44:33.078613 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.079009 kubelet[3174]: E0304 00:44:33.078636 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.079009 kubelet[3174]: E0304 00:44:33.078823 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.079009 kubelet[3174]: W0304 00:44:33.078830 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.079009 kubelet[3174]: E0304 00:44:33.078839 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.079009 kubelet[3174]: E0304 00:44:33.079002 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.079009 kubelet[3174]: W0304 00:44:33.079010 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.079431 kubelet[3174]: E0304 00:44:33.079042 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.079431 kubelet[3174]: E0304 00:44:33.079378 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.079431 kubelet[3174]: W0304 00:44:33.079388 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.079431 kubelet[3174]: E0304 00:44:33.079398 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.080494 kubelet[3174]: E0304 00:44:33.079601 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.080494 kubelet[3174]: W0304 00:44:33.080169 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.080494 kubelet[3174]: E0304 00:44:33.080188 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.082797 kubelet[3174]: E0304 00:44:33.080516 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.082797 kubelet[3174]: W0304 00:44:33.080526 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.082797 kubelet[3174]: E0304 00:44:33.080537 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.082797 kubelet[3174]: E0304 00:44:33.081017 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.082797 kubelet[3174]: W0304 00:44:33.081029 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.082797 kubelet[3174]: E0304 00:44:33.081040 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.085051 kubelet[3174]: E0304 00:44:33.084720 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.085051 kubelet[3174]: W0304 00:44:33.084738 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.085051 kubelet[3174]: E0304 00:44:33.084750 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.086475 kubelet[3174]: E0304 00:44:33.085790 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a"
Mar 4 00:44:33.087310 kubelet[3174]: E0304 00:44:33.086799 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.087310 kubelet[3174]: W0304 00:44:33.086818 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.087310 kubelet[3174]: E0304 00:44:33.086833 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.103584 kubelet[3174]: E0304 00:44:33.103550 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.104897 kubelet[3174]: W0304 00:44:33.104134 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.104897 kubelet[3174]: E0304 00:44:33.104168 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.169428 containerd[1721]: time="2026-03-04T00:44:33.169385559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b7d885bd-m65q7,Uid:f9384b9d-2234-4deb-b7df-fe914ac8391d,Namespace:calico-system,Attempt:0,}"
Mar 4 00:44:33.173992 kubelet[3174]: E0304 00:44:33.173968 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.174195 kubelet[3174]: W0304 00:44:33.174039 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.174195 kubelet[3174]: E0304 00:44:33.174060 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.174526 kubelet[3174]: E0304 00:44:33.174408 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.174526 kubelet[3174]: W0304 00:44:33.174420 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.174526 kubelet[3174]: E0304 00:44:33.174462 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.174826 kubelet[3174]: E0304 00:44:33.174748 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.174826 kubelet[3174]: W0304 00:44:33.174761 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.174826 kubelet[3174]: E0304 00:44:33.174778 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.175086 kubelet[3174]: E0304 00:44:33.175043 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.175086 kubelet[3174]: W0304 00:44:33.175053 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.175086 kubelet[3174]: E0304 00:44:33.175063 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.175477 kubelet[3174]: E0304 00:44:33.175385 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.175477 kubelet[3174]: W0304 00:44:33.175397 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.175477 kubelet[3174]: E0304 00:44:33.175407 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.175745 kubelet[3174]: E0304 00:44:33.175648 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.175745 kubelet[3174]: W0304 00:44:33.175658 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.175745 kubelet[3174]: E0304 00:44:33.175667 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.175957 kubelet[3174]: E0304 00:44:33.175894 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.175957 kubelet[3174]: W0304 00:44:33.175904 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.175957 kubelet[3174]: E0304 00:44:33.175913 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.176423 kubelet[3174]: E0304 00:44:33.176303 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.176423 kubelet[3174]: W0304 00:44:33.176314 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.176423 kubelet[3174]: E0304 00:44:33.176329 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.176692 kubelet[3174]: E0304 00:44:33.176631 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.176692 kubelet[3174]: W0304 00:44:33.176643 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.176692 kubelet[3174]: E0304 00:44:33.176652 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.177003 kubelet[3174]: E0304 00:44:33.176939 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.177003 kubelet[3174]: W0304 00:44:33.176950 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.177003 kubelet[3174]: E0304 00:44:33.176961 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.177291 kubelet[3174]: E0304 00:44:33.177250 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.177291 kubelet[3174]: W0304 00:44:33.177261 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.177291 kubelet[3174]: E0304 00:44:33.177271 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.177654 kubelet[3174]: E0304 00:44:33.177544 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.177654 kubelet[3174]: W0304 00:44:33.177555 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.177654 kubelet[3174]: E0304 00:44:33.177565 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.177892 kubelet[3174]: E0304 00:44:33.177815 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.177892 kubelet[3174]: W0304 00:44:33.177827 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.177892 kubelet[3174]: E0304 00:44:33.177837 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.178218 kubelet[3174]: E0304 00:44:33.178163 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.178218 kubelet[3174]: W0304 00:44:33.178175 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.178218 kubelet[3174]: E0304 00:44:33.178185 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.178526 kubelet[3174]: E0304 00:44:33.178462 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.178526 kubelet[3174]: W0304 00:44:33.178472 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.178526 kubelet[3174]: E0304 00:44:33.178487 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.178756 kubelet[3174]: E0304 00:44:33.178746 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.178849 kubelet[3174]: W0304 00:44:33.178811 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.178849 kubelet[3174]: E0304 00:44:33.178826 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.179171 kubelet[3174]: E0304 00:44:33.179080 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.179171 kubelet[3174]: W0304 00:44:33.179090 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.179171 kubelet[3174]: E0304 00:44:33.179100 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.179379 kubelet[3174]: E0304 00:44:33.179340 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.179379 kubelet[3174]: W0304 00:44:33.179351 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.179379 kubelet[3174]: E0304 00:44:33.179360 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.179630 kubelet[3174]: E0304 00:44:33.179620 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.179713 kubelet[3174]: W0304 00:44:33.179702 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.179849 kubelet[3174]: E0304 00:44:33.179762 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.180162 kubelet[3174]: E0304 00:44:33.180150 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.180320 kubelet[3174]: W0304 00:44:33.180221 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.180320 kubelet[3174]: E0304 00:44:33.180236 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.180735 kubelet[3174]: E0304 00:44:33.180624 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.180735 kubelet[3174]: W0304 00:44:33.180645 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.180735 kubelet[3174]: E0304 00:44:33.180656 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.180735 kubelet[3174]: I0304 00:44:33.180680 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b70ced7-e20a-43ff-ab70-0b136d70674a-kubelet-dir\") pod \"csi-node-driver-lgcgd\" (UID: \"8b70ced7-e20a-43ff-ab70-0b136d70674a\") " pod="calico-system/csi-node-driver-lgcgd"
Mar 4 00:44:33.181147 kubelet[3174]: E0304 00:44:33.181035 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.181147 kubelet[3174]: W0304 00:44:33.181047 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.181147 kubelet[3174]: E0304 00:44:33.181059 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.181147 kubelet[3174]: I0304 00:44:33.181080 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf2k6\" (UniqueName: \"kubernetes.io/projected/8b70ced7-e20a-43ff-ab70-0b136d70674a-kube-api-access-cf2k6\") pod \"csi-node-driver-lgcgd\" (UID: \"8b70ced7-e20a-43ff-ab70-0b136d70674a\") " pod="calico-system/csi-node-driver-lgcgd"
Mar 4 00:44:33.181322 kubelet[3174]: E0304 00:44:33.181246 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.181322 kubelet[3174]: W0304 00:44:33.181257 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.181322 kubelet[3174]: E0304 00:44:33.181268 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.181432 kubelet[3174]: E0304 00:44:33.181421 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.181432 kubelet[3174]: W0304 00:44:33.181430 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.181503 kubelet[3174]: E0304 00:44:33.181440 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.181653 kubelet[3174]: E0304 00:44:33.181640 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.181723 kubelet[3174]: W0304 00:44:33.181650 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.181723 kubelet[3174]: E0304 00:44:33.181677 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.181723 kubelet[3174]: I0304 00:44:33.181702 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8b70ced7-e20a-43ff-ab70-0b136d70674a-socket-dir\") pod \"csi-node-driver-lgcgd\" (UID: \"8b70ced7-e20a-43ff-ab70-0b136d70674a\") " pod="calico-system/csi-node-driver-lgcgd"
Mar 4 00:44:33.181879 kubelet[3174]: E0304 00:44:33.181865 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.181879 kubelet[3174]: W0304 00:44:33.181878 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.181949 kubelet[3174]: E0304 00:44:33.181889 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.181949 kubelet[3174]: I0304 00:44:33.181907 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8b70ced7-e20a-43ff-ab70-0b136d70674a-varrun\") pod \"csi-node-driver-lgcgd\" (UID: \"8b70ced7-e20a-43ff-ab70-0b136d70674a\") " pod="calico-system/csi-node-driver-lgcgd"
Mar 4 00:44:33.182093 kubelet[3174]: E0304 00:44:33.182078 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.182093 kubelet[3174]: W0304 00:44:33.182091 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.182208 kubelet[3174]: E0304 00:44:33.182101 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.182208 kubelet[3174]: I0304 00:44:33.182160 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8b70ced7-e20a-43ff-ab70-0b136d70674a-registration-dir\") pod \"csi-node-driver-lgcgd\" (UID: \"8b70ced7-e20a-43ff-ab70-0b136d70674a\") " pod="calico-system/csi-node-driver-lgcgd"
Mar 4 00:44:33.182531 kubelet[3174]: E0304 00:44:33.182514 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.182531 kubelet[3174]: W0304 00:44:33.182530 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.182609 kubelet[3174]: E0304 00:44:33.182542 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.182718 kubelet[3174]: E0304 00:44:33.182706 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.182748 kubelet[3174]: W0304 00:44:33.182718 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.182748 kubelet[3174]: E0304 00:44:33.182728 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.182864 kubelet[3174]: E0304 00:44:33.182854 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.182864 kubelet[3174]: W0304 00:44:33.182863 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.182935 kubelet[3174]: E0304 00:44:33.182871 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 00:44:33.183018 kubelet[3174]: E0304 00:44:33.183006 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 00:44:33.183018 kubelet[3174]: W0304 00:44:33.183016 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 00:44:33.183096 kubelet[3174]: E0304 00:44:33.183023 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 4 00:44:33.184900 kubelet[3174]: E0304 00:44:33.183195 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.184900 kubelet[3174]: W0304 00:44:33.183204 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.184900 kubelet[3174]: E0304 00:44:33.183212 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.184900 kubelet[3174]: E0304 00:44:33.183383 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.184900 kubelet[3174]: W0304 00:44:33.183391 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.184900 kubelet[3174]: E0304 00:44:33.183398 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.184900 kubelet[3174]: E0304 00:44:33.183548 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.184900 kubelet[3174]: W0304 00:44:33.183555 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.184900 kubelet[3174]: E0304 00:44:33.183563 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.184900 kubelet[3174]: E0304 00:44:33.183698 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.185155 kubelet[3174]: W0304 00:44:33.183704 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.185155 kubelet[3174]: E0304 00:44:33.183712 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.209434 containerd[1721]: time="2026-03-04T00:44:33.209335547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:33.209434 containerd[1721]: time="2026-03-04T00:44:33.209388787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:33.209597 containerd[1721]: time="2026-03-04T00:44:33.209413627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:33.209597 containerd[1721]: time="2026-03-04T00:44:33.209502907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:33.225282 systemd[1]: Started cri-containerd-a9c2a6cebe976c2e0ca8c8fae52a3ada21dd0221971d16d6ddd52b8400cf3cb2.scope - libcontainer container a9c2a6cebe976c2e0ca8c8fae52a3ada21dd0221971d16d6ddd52b8400cf3cb2. Mar 4 00:44:33.252682 containerd[1721]: time="2026-03-04T00:44:33.252627857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b7d885bd-m65q7,Uid:f9384b9d-2234-4deb-b7df-fe914ac8391d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9c2a6cebe976c2e0ca8c8fae52a3ada21dd0221971d16d6ddd52b8400cf3cb2\"" Mar 4 00:44:33.254837 containerd[1721]: time="2026-03-04T00:44:33.254797138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 4 00:44:33.282719 kubelet[3174]: E0304 00:44:33.282692 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.283016 kubelet[3174]: W0304 00:44:33.282858 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.283016 kubelet[3174]: E0304 00:44:33.282882 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.283158 kubelet[3174]: E0304 00:44:33.283147 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.283229 kubelet[3174]: W0304 00:44:33.283217 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.283286 kubelet[3174]: E0304 00:44:33.283276 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.283509 kubelet[3174]: E0304 00:44:33.283498 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.283616 kubelet[3174]: W0304 00:44:33.283567 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.283616 kubelet[3174]: E0304 00:44:33.283581 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.283783 kubelet[3174]: E0304 00:44:33.283762 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.283824 kubelet[3174]: W0304 00:44:33.283780 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.283824 kubelet[3174]: E0304 00:44:33.283794 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.284000 kubelet[3174]: E0304 00:44:33.283988 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.284000 kubelet[3174]: W0304 00:44:33.283999 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.284054 kubelet[3174]: E0304 00:44:33.284009 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.284186 kubelet[3174]: E0304 00:44:33.284174 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.284227 kubelet[3174]: W0304 00:44:33.284192 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.284227 kubelet[3174]: E0304 00:44:33.284202 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.284453 kubelet[3174]: E0304 00:44:33.284439 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.284453 kubelet[3174]: W0304 00:44:33.284452 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.284509 kubelet[3174]: E0304 00:44:33.284462 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.284680 kubelet[3174]: E0304 00:44:33.284664 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.284680 kubelet[3174]: W0304 00:44:33.284678 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.284741 kubelet[3174]: E0304 00:44:33.284689 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.284899 kubelet[3174]: E0304 00:44:33.284886 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.284899 kubelet[3174]: W0304 00:44:33.284898 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.284967 kubelet[3174]: E0304 00:44:33.284908 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.285181 kubelet[3174]: E0304 00:44:33.285166 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.285270 kubelet[3174]: W0304 00:44:33.285179 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.285270 kubelet[3174]: E0304 00:44:33.285197 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.285432 kubelet[3174]: E0304 00:44:33.285420 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.285432 kubelet[3174]: W0304 00:44:33.285431 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.285487 kubelet[3174]: E0304 00:44:33.285440 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.285833 kubelet[3174]: E0304 00:44:33.285816 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.285833 kubelet[3174]: W0304 00:44:33.285831 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.285896 kubelet[3174]: E0304 00:44:33.285842 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.286291 containerd[1721]: time="2026-03-04T00:44:33.286251280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9bcdf,Uid:5b9debdd-b7ab-41bf-bb7d-7d5f05238df4,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:33.286772 kubelet[3174]: E0304 00:44:33.286448 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.286772 kubelet[3174]: W0304 00:44:33.286465 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.286772 kubelet[3174]: E0304 00:44:33.286480 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.287176 kubelet[3174]: E0304 00:44:33.287052 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.287372 kubelet[3174]: W0304 00:44:33.287259 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.287372 kubelet[3174]: E0304 00:44:33.287278 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.287698 kubelet[3174]: E0304 00:44:33.287681 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.287785 kubelet[3174]: W0304 00:44:33.287772 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.287840 kubelet[3174]: E0304 00:44:33.287830 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.288501 kubelet[3174]: E0304 00:44:33.288484 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.288596 kubelet[3174]: W0304 00:44:33.288584 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.288750 kubelet[3174]: E0304 00:44:33.288647 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.289146 kubelet[3174]: E0304 00:44:33.289000 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.289146 kubelet[3174]: W0304 00:44:33.289014 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.289146 kubelet[3174]: E0304 00:44:33.289026 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.289312 kubelet[3174]: E0304 00:44:33.289300 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.289648 kubelet[3174]: W0304 00:44:33.289625 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.289734 kubelet[3174]: E0304 00:44:33.289723 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.290261 kubelet[3174]: E0304 00:44:33.290228 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.290936 kubelet[3174]: W0304 00:44:33.290780 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.290936 kubelet[3174]: E0304 00:44:33.290806 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.291279 kubelet[3174]: E0304 00:44:33.291177 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.291279 kubelet[3174]: W0304 00:44:33.291189 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.291279 kubelet[3174]: E0304 00:44:33.291200 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.291556 kubelet[3174]: E0304 00:44:33.291453 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.291556 kubelet[3174]: W0304 00:44:33.291464 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.291556 kubelet[3174]: E0304 00:44:33.291477 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.291826 kubelet[3174]: E0304 00:44:33.291721 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.291826 kubelet[3174]: W0304 00:44:33.291737 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.291826 kubelet[3174]: E0304 00:44:33.291747 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.292099 kubelet[3174]: E0304 00:44:33.292086 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.292301 kubelet[3174]: W0304 00:44:33.292189 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.292301 kubelet[3174]: E0304 00:44:33.292224 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.292991 kubelet[3174]: E0304 00:44:33.292977 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.293064 kubelet[3174]: W0304 00:44:33.293053 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.293149 kubelet[3174]: E0304 00:44:33.293139 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.294757 kubelet[3174]: E0304 00:44:33.293494 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.294757 kubelet[3174]: W0304 00:44:33.293504 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.294757 kubelet[3174]: E0304 00:44:33.293515 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:33.299947 kubelet[3174]: E0304 00:44:33.299926 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:33.299947 kubelet[3174]: W0304 00:44:33.299942 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:33.300037 kubelet[3174]: E0304 00:44:33.299956 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:33.333947 containerd[1721]: time="2026-03-04T00:44:33.333699473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:33.333947 containerd[1721]: time="2026-03-04T00:44:33.333813833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:33.333947 containerd[1721]: time="2026-03-04T00:44:33.333831953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:33.334259 containerd[1721]: time="2026-03-04T00:44:33.333965593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:33.349337 systemd[1]: Started cri-containerd-2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2.scope - libcontainer container 2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2. 
Mar 4 00:44:33.369625 containerd[1721]: time="2026-03-04T00:44:33.369526378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9bcdf,Uid:5b9debdd-b7ab-41bf-bb7d-7d5f05238df4,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\"" Mar 4 00:44:34.633625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185586805.mount: Deactivated successfully. Mar 4 00:44:35.054659 kubelet[3174]: E0304 00:44:35.054618 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:35.435837 containerd[1721]: time="2026-03-04T00:44:35.435786128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:35.438448 containerd[1721]: time="2026-03-04T00:44:35.438405050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 4 00:44:35.441963 containerd[1721]: time="2026-03-04T00:44:35.441609492Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:35.446719 containerd[1721]: time="2026-03-04T00:44:35.446678135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:35.447697 containerd[1721]: time="2026-03-04T00:44:35.447483176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.192541437s" Mar 4 00:44:35.447697 containerd[1721]: time="2026-03-04T00:44:35.447520056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 4 00:44:35.448872 containerd[1721]: time="2026-03-04T00:44:35.448693257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 4 00:44:35.470212 containerd[1721]: time="2026-03-04T00:44:35.470054232Z" level=info msg="CreateContainer within sandbox \"a9c2a6cebe976c2e0ca8c8fae52a3ada21dd0221971d16d6ddd52b8400cf3cb2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 4 00:44:35.504647 containerd[1721]: time="2026-03-04T00:44:35.504516175Z" level=info msg="CreateContainer within sandbox \"a9c2a6cebe976c2e0ca8c8fae52a3ada21dd0221971d16d6ddd52b8400cf3cb2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c5919ae75b66cbf002ba130f642ac6441dcc58116f4a70aaa1469f5962528d45\"" Mar 4 00:44:35.505416 containerd[1721]: time="2026-03-04T00:44:35.505383296Z" level=info msg="StartContainer for \"c5919ae75b66cbf002ba130f642ac6441dcc58116f4a70aaa1469f5962528d45\"" Mar 4 00:44:35.543493 systemd[1]: Started cri-containerd-c5919ae75b66cbf002ba130f642ac6441dcc58116f4a70aaa1469f5962528d45.scope - libcontainer container c5919ae75b66cbf002ba130f642ac6441dcc58116f4a70aaa1469f5962528d45. 
Mar 4 00:44:35.601467 containerd[1721]: time="2026-03-04T00:44:35.601426042Z" level=info msg="StartContainer for \"c5919ae75b66cbf002ba130f642ac6441dcc58116f4a70aaa1469f5962528d45\" returns successfully" Mar 4 00:44:36.157097 kubelet[3174]: I0304 00:44:36.156400 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59b7d885bd-m65q7" podStartSLOduration=1.9616801480000001 podStartE2EDuration="4.156386467s" podCreationTimestamp="2026-03-04 00:44:32 +0000 UTC" firstStartedPulling="2026-03-04 00:44:33.253858378 +0000 UTC m=+22.313030997" lastFinishedPulling="2026-03-04 00:44:35.448564697 +0000 UTC m=+24.507737316" observedRunningTime="2026-03-04 00:44:36.155630306 +0000 UTC m=+25.214802925" watchObservedRunningTime="2026-03-04 00:44:36.156386467 +0000 UTC m=+25.215559046" Mar 4 00:44:36.200738 kubelet[3174]: E0304 00:44:36.200702 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.200738 kubelet[3174]: W0304 00:44:36.200726 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.200738 kubelet[3174]: E0304 00:44:36.200746 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.200974 kubelet[3174]: E0304 00:44:36.200911 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.200974 kubelet[3174]: W0304 00:44:36.200919 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.200974 kubelet[3174]: E0304 00:44:36.200957 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.201100 kubelet[3174]: E0304 00:44:36.201088 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.201100 kubelet[3174]: W0304 00:44:36.201097 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.201209 kubelet[3174]: E0304 00:44:36.201123 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.201273 kubelet[3174]: E0304 00:44:36.201258 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.201273 kubelet[3174]: W0304 00:44:36.201268 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.201352 kubelet[3174]: E0304 00:44:36.201276 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.201424 kubelet[3174]: E0304 00:44:36.201411 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.201424 kubelet[3174]: W0304 00:44:36.201423 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.201504 kubelet[3174]: E0304 00:44:36.201432 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.201561 kubelet[3174]: E0304 00:44:36.201543 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.201561 kubelet[3174]: W0304 00:44:36.201549 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.201561 kubelet[3174]: E0304 00:44:36.201556 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.201677 kubelet[3174]: E0304 00:44:36.201668 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.201677 kubelet[3174]: W0304 00:44:36.201674 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.201816 kubelet[3174]: E0304 00:44:36.201681 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.201848 kubelet[3174]: E0304 00:44:36.201835 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.201848 kubelet[3174]: W0304 00:44:36.201842 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.201848 kubelet[3174]: E0304 00:44:36.201850 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.201995 kubelet[3174]: E0304 00:44:36.201982 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.201995 kubelet[3174]: W0304 00:44:36.201994 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.202074 kubelet[3174]: E0304 00:44:36.202003 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.202151 kubelet[3174]: E0304 00:44:36.202138 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.202151 kubelet[3174]: W0304 00:44:36.202149 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.202228 kubelet[3174]: E0304 00:44:36.202158 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.202285 kubelet[3174]: E0304 00:44:36.202274 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.202285 kubelet[3174]: W0304 00:44:36.202283 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.202356 kubelet[3174]: E0304 00:44:36.202291 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.202418 kubelet[3174]: E0304 00:44:36.202407 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.202418 kubelet[3174]: W0304 00:44:36.202418 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.202482 kubelet[3174]: E0304 00:44:36.202426 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.202554 kubelet[3174]: E0304 00:44:36.202543 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.202554 kubelet[3174]: W0304 00:44:36.202552 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.202629 kubelet[3174]: E0304 00:44:36.202560 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.202689 kubelet[3174]: E0304 00:44:36.202679 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.202689 kubelet[3174]: W0304 00:44:36.202687 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.202743 kubelet[3174]: E0304 00:44:36.202694 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.202823 kubelet[3174]: E0304 00:44:36.202812 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.202823 kubelet[3174]: W0304 00:44:36.202821 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.202881 kubelet[3174]: E0304 00:44:36.202828 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.211309 kubelet[3174]: E0304 00:44:36.211234 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.211309 kubelet[3174]: W0304 00:44:36.211252 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.211309 kubelet[3174]: E0304 00:44:36.211266 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.211497 kubelet[3174]: E0304 00:44:36.211483 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.211497 kubelet[3174]: W0304 00:44:36.211494 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.211560 kubelet[3174]: E0304 00:44:36.211504 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.211702 kubelet[3174]: E0304 00:44:36.211688 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.211702 kubelet[3174]: W0304 00:44:36.211700 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.211765 kubelet[3174]: E0304 00:44:36.211709 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.211943 kubelet[3174]: E0304 00:44:36.211930 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.211943 kubelet[3174]: W0304 00:44:36.211940 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.212032 kubelet[3174]: E0304 00:44:36.211952 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.212157 kubelet[3174]: E0304 00:44:36.212144 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.212157 kubelet[3174]: W0304 00:44:36.212155 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.212234 kubelet[3174]: E0304 00:44:36.212164 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.212326 kubelet[3174]: E0304 00:44:36.212314 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.212359 kubelet[3174]: W0304 00:44:36.212330 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.212359 kubelet[3174]: E0304 00:44:36.212339 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.212559 kubelet[3174]: E0304 00:44:36.212545 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.212559 kubelet[3174]: W0304 00:44:36.212559 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.212970 kubelet[3174]: E0304 00:44:36.212568 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.212970 kubelet[3174]: E0304 00:44:36.212730 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.212970 kubelet[3174]: W0304 00:44:36.212738 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.212970 kubelet[3174]: E0304 00:44:36.212749 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.212970 kubelet[3174]: E0304 00:44:36.212899 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.212970 kubelet[3174]: W0304 00:44:36.212907 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.212970 kubelet[3174]: E0304 00:44:36.212915 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.213183 kubelet[3174]: E0304 00:44:36.213049 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.213183 kubelet[3174]: W0304 00:44:36.213056 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.213183 kubelet[3174]: E0304 00:44:36.213075 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.213249 kubelet[3174]: E0304 00:44:36.213206 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.213249 kubelet[3174]: W0304 00:44:36.213213 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.213249 kubelet[3174]: E0304 00:44:36.213221 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.213371 kubelet[3174]: E0304 00:44:36.213353 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.213371 kubelet[3174]: W0304 00:44:36.213366 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.213433 kubelet[3174]: E0304 00:44:36.213374 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.213743 kubelet[3174]: E0304 00:44:36.213640 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.213743 kubelet[3174]: W0304 00:44:36.213655 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.213743 kubelet[3174]: E0304 00:44:36.213667 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.214050 kubelet[3174]: E0304 00:44:36.213985 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.214050 kubelet[3174]: W0304 00:44:36.213996 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.214050 kubelet[3174]: E0304 00:44:36.214007 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.214403 kubelet[3174]: E0304 00:44:36.214319 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.214403 kubelet[3174]: W0304 00:44:36.214331 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.214403 kubelet[3174]: E0304 00:44:36.214341 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.214770 kubelet[3174]: E0304 00:44:36.214633 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.214770 kubelet[3174]: W0304 00:44:36.214643 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.214770 kubelet[3174]: E0304 00:44:36.214653 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:36.215025 kubelet[3174]: E0304 00:44:36.214893 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.215025 kubelet[3174]: W0304 00:44:36.214907 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.215025 kubelet[3174]: E0304 00:44:36.214917 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:36.215189 kubelet[3174]: E0304 00:44:36.215177 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:36.215254 kubelet[3174]: W0304 00:44:36.215243 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:36.215316 kubelet[3174]: E0304 00:44:36.215306 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.054709 kubelet[3174]: E0304 00:44:37.054360 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:37.139612 kubelet[3174]: I0304 00:44:37.139585 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:44:37.143022 containerd[1721]: time="2026-03-04T00:44:37.142986389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:37.148454 containerd[1721]: time="2026-03-04T00:44:37.148413273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 4 00:44:37.152091 containerd[1721]: time="2026-03-04T00:44:37.151831996Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:37.156472 containerd[1721]: time="2026-03-04T00:44:37.156436239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:37.157262 containerd[1721]: time="2026-03-04T00:44:37.157235639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size 
\"5855167\" in 1.708509382s" Mar 4 00:44:37.157412 containerd[1721]: time="2026-03-04T00:44:37.157348279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 4 00:44:37.164993 containerd[1721]: time="2026-03-04T00:44:37.164885405Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 4 00:44:37.200315 containerd[1721]: time="2026-03-04T00:44:37.200269709Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2\"" Mar 4 00:44:37.201149 containerd[1721]: time="2026-03-04T00:44:37.200996550Z" level=info msg="StartContainer for \"3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2\"" Mar 4 00:44:37.209301 kubelet[3174]: E0304 00:44:37.209274 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.210102 kubelet[3174]: W0304 00:44:37.209673 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.210102 kubelet[3174]: E0304 00:44:37.209703 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.210102 kubelet[3174]: E0304 00:44:37.210070 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.210102 kubelet[3174]: W0304 00:44:37.210080 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.210349 kubelet[3174]: E0304 00:44:37.210187 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.211181 kubelet[3174]: E0304 00:44:37.210615 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.211181 kubelet[3174]: W0304 00:44:37.210628 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.211181 kubelet[3174]: E0304 00:44:37.210639 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.213407 kubelet[3174]: E0304 00:44:37.213212 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.213407 kubelet[3174]: W0304 00:44:37.213226 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.213407 kubelet[3174]: E0304 00:44:37.213238 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.213622 kubelet[3174]: E0304 00:44:37.213479 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.213622 kubelet[3174]: W0304 00:44:37.213487 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.213622 kubelet[3174]: E0304 00:44:37.213497 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.213979 kubelet[3174]: E0304 00:44:37.213878 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.213979 kubelet[3174]: W0304 00:44:37.213889 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.213979 kubelet[3174]: E0304 00:44:37.213899 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.214230 kubelet[3174]: E0304 00:44:37.214085 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.214230 kubelet[3174]: W0304 00:44:37.214093 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.214385 kubelet[3174]: E0304 00:44:37.214102 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.215034 kubelet[3174]: E0304 00:44:37.214548 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.215034 kubelet[3174]: W0304 00:44:37.214559 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.215034 kubelet[3174]: E0304 00:44:37.214569 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.215665 kubelet[3174]: E0304 00:44:37.215530 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.215665 kubelet[3174]: W0304 00:44:37.215542 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.215665 kubelet[3174]: E0304 00:44:37.215553 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.215989 kubelet[3174]: E0304 00:44:37.215838 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.215989 kubelet[3174]: W0304 00:44:37.215851 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.215989 kubelet[3174]: E0304 00:44:37.215861 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.216276 kubelet[3174]: E0304 00:44:37.216170 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.216276 kubelet[3174]: W0304 00:44:37.216181 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.216276 kubelet[3174]: E0304 00:44:37.216191 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.216630 kubelet[3174]: E0304 00:44:37.216528 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.216630 kubelet[3174]: W0304 00:44:37.216539 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.216630 kubelet[3174]: E0304 00:44:37.216550 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.216938 kubelet[3174]: E0304 00:44:37.216860 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.216938 kubelet[3174]: W0304 00:44:37.216872 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.216938 kubelet[3174]: E0304 00:44:37.216882 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.217302 kubelet[3174]: E0304 00:44:37.217200 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.217302 kubelet[3174]: W0304 00:44:37.217212 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.217302 kubelet[3174]: E0304 00:44:37.217221 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.217780 kubelet[3174]: E0304 00:44:37.217584 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.217780 kubelet[3174]: W0304 00:44:37.217595 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.217780 kubelet[3174]: E0304 00:44:37.217605 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.217995 kubelet[3174]: E0304 00:44:37.217983 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.218059 kubelet[3174]: W0304 00:44:37.218049 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.218336 kubelet[3174]: E0304 00:44:37.218251 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.218595 kubelet[3174]: E0304 00:44:37.218523 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.218595 kubelet[3174]: W0304 00:44:37.218536 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.218595 kubelet[3174]: E0304 00:44:37.218546 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.218818 kubelet[3174]: E0304 00:44:37.218799 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.218867 kubelet[3174]: W0304 00:44:37.218822 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.218867 kubelet[3174]: E0304 00:44:37.218835 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.219062 kubelet[3174]: E0304 00:44:37.219048 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.219062 kubelet[3174]: W0304 00:44:37.219060 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.219242 kubelet[3174]: E0304 00:44:37.219071 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.219346 kubelet[3174]: E0304 00:44:37.219332 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.219346 kubelet[3174]: W0304 00:44:37.219344 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.219411 kubelet[3174]: E0304 00:44:37.219355 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.219645 kubelet[3174]: E0304 00:44:37.219632 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.219645 kubelet[3174]: W0304 00:44:37.219644 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.219711 kubelet[3174]: E0304 00:44:37.219654 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.220023 kubelet[3174]: E0304 00:44:37.219903 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.220023 kubelet[3174]: W0304 00:44:37.219915 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.220023 kubelet[3174]: E0304 00:44:37.219925 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.220428 kubelet[3174]: E0304 00:44:37.220319 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.220428 kubelet[3174]: W0304 00:44:37.220332 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.220428 kubelet[3174]: E0304 00:44:37.220346 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.220643 kubelet[3174]: E0304 00:44:37.220629 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.220749 kubelet[3174]: W0304 00:44:37.220688 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.220749 kubelet[3174]: E0304 00:44:37.220704 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.221151 kubelet[3174]: E0304 00:44:37.221018 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.221151 kubelet[3174]: W0304 00:44:37.221031 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.221151 kubelet[3174]: E0304 00:44:37.221043 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.221683 kubelet[3174]: E0304 00:44:37.221608 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.221683 kubelet[3174]: W0304 00:44:37.221623 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.221683 kubelet[3174]: E0304 00:44:37.221634 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.222863 kubelet[3174]: E0304 00:44:37.222688 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.222863 kubelet[3174]: W0304 00:44:37.222703 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.222863 kubelet[3174]: E0304 00:44:37.222719 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.223402 kubelet[3174]: E0304 00:44:37.223208 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.223402 kubelet[3174]: W0304 00:44:37.223222 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.223402 kubelet[3174]: E0304 00:44:37.223236 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.223740 kubelet[3174]: E0304 00:44:37.223567 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.223740 kubelet[3174]: W0304 00:44:37.223579 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.223740 kubelet[3174]: E0304 00:44:37.223590 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.223953 kubelet[3174]: E0304 00:44:37.223940 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.224016 kubelet[3174]: W0304 00:44:37.224005 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.224077 kubelet[3174]: E0304 00:44:37.224066 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.224884 kubelet[3174]: E0304 00:44:37.224771 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.224884 kubelet[3174]: W0304 00:44:37.224783 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.224884 kubelet[3174]: E0304 00:44:37.224794 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 00:44:37.225092 kubelet[3174]: E0304 00:44:37.225081 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.225441 kubelet[3174]: W0304 00:44:37.225183 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.225441 kubelet[3174]: E0304 00:44:37.225200 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.225595 kubelet[3174]: E0304 00:44:37.225582 3174 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 00:44:37.225669 kubelet[3174]: W0304 00:44:37.225635 3174 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 00:44:37.225669 kubelet[3174]: E0304 00:44:37.225649 3174 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 00:44:37.236258 systemd[1]: Started cri-containerd-3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2.scope - libcontainer container 3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2. Mar 4 00:44:37.266700 containerd[1721]: time="2026-03-04T00:44:37.266654435Z" level=info msg="StartContainer for \"3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2\" returns successfully" Mar 4 00:44:37.276049 systemd[1]: cri-containerd-3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2.scope: Deactivated successfully. 
Mar 4 00:44:37.298755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2-rootfs.mount: Deactivated successfully. Mar 4 00:44:38.402908 containerd[1721]: time="2026-03-04T00:44:38.402843061Z" level=info msg="shim disconnected" id=3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2 namespace=k8s.io Mar 4 00:44:38.402908 containerd[1721]: time="2026-03-04T00:44:38.402899341Z" level=warning msg="cleaning up after shim disconnected" id=3c527440984825e3dddf69c658cf11c675ae1d0fbcdc51c5c57954dc3f3b67a2 namespace=k8s.io Mar 4 00:44:38.402908 containerd[1721]: time="2026-03-04T00:44:38.402908261Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 00:44:39.055227 kubelet[3174]: E0304 00:44:39.054712 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:39.147615 containerd[1721]: time="2026-03-04T00:44:39.147366817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 4 00:44:41.055530 kubelet[3174]: E0304 00:44:41.055463 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:43.055765 kubelet[3174]: E0304 00:44:43.054782 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" 
podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:45.054481 kubelet[3174]: E0304 00:44:45.054431 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:45.956679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660548683.mount: Deactivated successfully. Mar 4 00:44:46.068640 containerd[1721]: time="2026-03-04T00:44:46.068211146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:46.072248 containerd[1721]: time="2026-03-04T00:44:46.072211269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 4 00:44:46.076792 containerd[1721]: time="2026-03-04T00:44:46.076740033Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:46.081369 containerd[1721]: time="2026-03-04T00:44:46.081318437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:46.082143 containerd[1721]: time="2026-03-04T00:44:46.081910678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.934505101s" Mar 4 00:44:46.082143 containerd[1721]: 
time="2026-03-04T00:44:46.081946038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 4 00:44:46.090139 containerd[1721]: time="2026-03-04T00:44:46.089995965Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 4 00:44:46.127445 containerd[1721]: time="2026-03-04T00:44:46.127402758Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b\"" Mar 4 00:44:46.128566 containerd[1721]: time="2026-03-04T00:44:46.128356519Z" level=info msg="StartContainer for \"8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b\"" Mar 4 00:44:46.157275 systemd[1]: Started cri-containerd-8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b.scope - libcontainer container 8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b. Mar 4 00:44:46.187126 containerd[1721]: time="2026-03-04T00:44:46.186976371Z" level=info msg="StartContainer for \"8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b\" returns successfully" Mar 4 00:44:46.222692 systemd[1]: cri-containerd-8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b.scope: Deactivated successfully. Mar 4 00:44:46.956637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b-rootfs.mount: Deactivated successfully. 
Mar 4 00:44:47.090322 kubelet[3174]: E0304 00:44:47.055414 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:47.846700 containerd[1721]: time="2026-03-04T00:44:47.846547603Z" level=info msg="shim disconnected" id=8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b namespace=k8s.io Mar 4 00:44:47.846700 containerd[1721]: time="2026-03-04T00:44:47.846609883Z" level=warning msg="cleaning up after shim disconnected" id=8eabb79aeeb14cd25bf6eb196c9a8649b7b8d9eedf8ccd2b0d0d47ef1463ae8b namespace=k8s.io Mar 4 00:44:47.846700 containerd[1721]: time="2026-03-04T00:44:47.846618643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 00:44:48.169447 containerd[1721]: time="2026-03-04T00:44:48.169356050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 4 00:44:49.055749 kubelet[3174]: E0304 00:44:49.054740 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:51.055044 kubelet[3174]: E0304 00:44:51.054456 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:51.342270 containerd[1721]: time="2026-03-04T00:44:51.342003580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:51.344939 containerd[1721]: time="2026-03-04T00:44:51.344901316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 4 00:44:51.348272 containerd[1721]: time="2026-03-04T00:44:51.348215015Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:51.352212 containerd[1721]: time="2026-03-04T00:44:51.352172837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:51.353320 containerd[1721]: time="2026-03-04T00:44:51.353203683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.183809793s" Mar 4 00:44:51.353320 containerd[1721]: time="2026-03-04T00:44:51.353240803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 4 00:44:51.361118 containerd[1721]: time="2026-03-04T00:44:51.361066767Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 4 00:44:51.386441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729948808.mount: Deactivated successfully. 
Mar 4 00:44:51.398873 containerd[1721]: time="2026-03-04T00:44:51.398825500Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95\"" Mar 4 00:44:51.399754 containerd[1721]: time="2026-03-04T00:44:51.399713505Z" level=info msg="StartContainer for \"9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95\"" Mar 4 00:44:51.427635 systemd[1]: run-containerd-runc-k8s.io-9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95-runc.gRcXmt.mount: Deactivated successfully. Mar 4 00:44:51.436331 systemd[1]: Started cri-containerd-9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95.scope - libcontainer container 9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95. Mar 4 00:44:51.467232 containerd[1721]: time="2026-03-04T00:44:51.467183365Z" level=info msg="StartContainer for \"9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95\" returns successfully" Mar 4 00:44:53.054765 kubelet[3174]: E0304 00:44:53.054312 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcgd" podUID="8b70ced7-e20a-43ff-ab70-0b136d70674a" Mar 4 00:44:53.679621 systemd[1]: cri-containerd-9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95.scope: Deactivated successfully. Mar 4 00:44:53.706024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95-rootfs.mount: Deactivated successfully. 
Mar 4 00:44:53.713542 containerd[1721]: time="2026-03-04T00:44:53.713484601Z" level=info msg="shim disconnected" id=9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95 namespace=k8s.io Mar 4 00:44:53.713542 containerd[1721]: time="2026-03-04T00:44:53.713536601Z" level=warning msg="cleaning up after shim disconnected" id=9d8926496596a1fc9a25fb590a38aff7a4e881af5ede2bf5cd2f3ce68f3a6d95 namespace=k8s.io Mar 4 00:44:53.713542 containerd[1721]: time="2026-03-04T00:44:53.713544881Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 00:44:53.726143 containerd[1721]: time="2026-03-04T00:44:53.724986730Z" level=warning msg="cleanup warnings time=\"2026-03-04T00:44:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 4 00:44:53.728687 kubelet[3174]: I0304 00:44:53.728662 3174 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 4 00:44:53.783373 systemd[1]: Created slice kubepods-burstable-poda012406e_5d5d_4be6_a07e_5f6094f9e248.slice - libcontainer container kubepods-burstable-poda012406e_5d5d_4be6_a07e_5f6094f9e248.slice. Mar 4 00:44:53.805479 systemd[1]: Created slice kubepods-besteffort-pod1026cf72_bf90_4cde_98d0_93ad03abe950.slice - libcontainer container kubepods-besteffort-pod1026cf72_bf90_4cde_98d0_93ad03abe950.slice. 
Mar 4 00:44:53.814683 kubelet[3174]: I0304 00:44:53.814655 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twkks\" (UniqueName: \"kubernetes.io/projected/a012406e-5d5d-4be6-a07e-5f6094f9e248-kube-api-access-twkks\") pod \"coredns-674b8bbfcf-5s27l\" (UID: \"a012406e-5d5d-4be6-a07e-5f6094f9e248\") " pod="kube-system/coredns-674b8bbfcf-5s27l" Mar 4 00:44:53.816355 kubelet[3174]: I0304 00:44:53.816264 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a012406e-5d5d-4be6-a07e-5f6094f9e248-config-volume\") pod \"coredns-674b8bbfcf-5s27l\" (UID: \"a012406e-5d5d-4be6-a07e-5f6094f9e248\") " pod="kube-system/coredns-674b8bbfcf-5s27l" Mar 4 00:44:53.817093 kubelet[3174]: I0304 00:44:53.816318 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstpf\" (UniqueName: \"kubernetes.io/projected/619b7aec-3809-4bd9-91c1-1f701fc98910-kube-api-access-nstpf\") pod \"calico-apiserver-6c457fd865-bbvcs\" (UID: \"619b7aec-3809-4bd9-91c1-1f701fc98910\") " pod="calico-system/calico-apiserver-6c457fd865-bbvcs" Mar 4 00:44:53.818499 kubelet[3174]: I0304 00:44:53.818449 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/892fe9de-db99-487d-bc78-cf3eb17d4d4b-config-volume\") pod \"coredns-674b8bbfcf-rjwpl\" (UID: \"892fe9de-db99-487d-bc78-cf3eb17d4d4b\") " pod="kube-system/coredns-674b8bbfcf-rjwpl" Mar 4 00:44:53.818854 kubelet[3174]: I0304 00:44:53.818622 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1026cf72-bf90-4cde-98d0-93ad03abe950-tigera-ca-bundle\") pod \"calico-kube-controllers-54988d8f9b-q276v\" (UID: 
\"1026cf72-bf90-4cde-98d0-93ad03abe950\") " pod="calico-system/calico-kube-controllers-54988d8f9b-q276v" Mar 4 00:44:53.819057 kubelet[3174]: I0304 00:44:53.818931 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5pt\" (UniqueName: \"kubernetes.io/projected/892fe9de-db99-487d-bc78-cf3eb17d4d4b-kube-api-access-pg5pt\") pod \"coredns-674b8bbfcf-rjwpl\" (UID: \"892fe9de-db99-487d-bc78-cf3eb17d4d4b\") " pod="kube-system/coredns-674b8bbfcf-rjwpl" Mar 4 00:44:53.819057 kubelet[3174]: I0304 00:44:53.818964 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtrb\" (UniqueName: \"kubernetes.io/projected/1026cf72-bf90-4cde-98d0-93ad03abe950-kube-api-access-bhtrb\") pod \"calico-kube-controllers-54988d8f9b-q276v\" (UID: \"1026cf72-bf90-4cde-98d0-93ad03abe950\") " pod="calico-system/calico-kube-controllers-54988d8f9b-q276v" Mar 4 00:44:53.819370 systemd[1]: Created slice kubepods-burstable-pod892fe9de_db99_487d_bc78_cf3eb17d4d4b.slice - libcontainer container kubepods-burstable-pod892fe9de_db99_487d_bc78_cf3eb17d4d4b.slice. Mar 4 00:44:53.820005 kubelet[3174]: I0304 00:44:53.818982 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/619b7aec-3809-4bd9-91c1-1f701fc98910-calico-apiserver-certs\") pod \"calico-apiserver-6c457fd865-bbvcs\" (UID: \"619b7aec-3809-4bd9-91c1-1f701fc98910\") " pod="calico-system/calico-apiserver-6c457fd865-bbvcs" Mar 4 00:44:53.830329 systemd[1]: Created slice kubepods-besteffort-pod71f59a49_90d2_4128_9e10_49da32cff0d5.slice - libcontainer container kubepods-besteffort-pod71f59a49_90d2_4128_9e10_49da32cff0d5.slice. 
Mar 4 00:44:53.841786 systemd[1]: Created slice kubepods-besteffort-pod0633db9d_7cfc_4660_aa53_674582de4b80.slice - libcontainer container kubepods-besteffort-pod0633db9d_7cfc_4660_aa53_674582de4b80.slice. Mar 4 00:44:53.849756 systemd[1]: Created slice kubepods-besteffort-poda41f2112_b709_42a8_b7ef_8a376aba0a93.slice - libcontainer container kubepods-besteffort-poda41f2112_b709_42a8_b7ef_8a376aba0a93.slice. Mar 4 00:44:53.854023 systemd[1]: Created slice kubepods-besteffort-pod619b7aec_3809_4bd9_91c1_1f701fc98910.slice - libcontainer container kubepods-besteffort-pod619b7aec_3809_4bd9_91c1_1f701fc98910.slice. Mar 4 00:44:53.920930 kubelet[3174]: I0304 00:44:53.920723 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41f2112-b709-42a8-b7ef-8a376aba0a93-config\") pod \"goldmane-5b85766d88-mnq4v\" (UID: \"a41f2112-b709-42a8-b7ef-8a376aba0a93\") " pod="calico-system/goldmane-5b85766d88-mnq4v" Mar 4 00:44:53.921180 kubelet[3174]: I0304 00:44:53.920941 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-ca-bundle\") pod \"whisker-bfbbd99b4-flhbl\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " pod="calico-system/whisker-bfbbd99b4-flhbl" Mar 4 00:44:53.921352 kubelet[3174]: I0304 00:44:53.921207 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jf2t\" (UniqueName: \"kubernetes.io/projected/71f59a49-90d2-4128-9e10-49da32cff0d5-kube-api-access-2jf2t\") pod \"whisker-bfbbd99b4-flhbl\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " pod="calico-system/whisker-bfbbd99b4-flhbl" Mar 4 00:44:53.921507 kubelet[3174]: I0304 00:44:53.921370 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0633db9d-7cfc-4660-aa53-674582de4b80-calico-apiserver-certs\") pod \"calico-apiserver-6c457fd865-j6sg7\" (UID: \"0633db9d-7cfc-4660-aa53-674582de4b80\") " pod="calico-system/calico-apiserver-6c457fd865-j6sg7" Mar 4 00:44:53.921697 kubelet[3174]: I0304 00:44:53.921645 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a41f2112-b709-42a8-b7ef-8a376aba0a93-goldmane-key-pair\") pod \"goldmane-5b85766d88-mnq4v\" (UID: \"a41f2112-b709-42a8-b7ef-8a376aba0a93\") " pod="calico-system/goldmane-5b85766d88-mnq4v" Mar 4 00:44:53.921794 kubelet[3174]: I0304 00:44:53.921711 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-nginx-config\") pod \"whisker-bfbbd99b4-flhbl\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " pod="calico-system/whisker-bfbbd99b4-flhbl" Mar 4 00:44:53.921940 kubelet[3174]: I0304 00:44:53.921809 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-backend-key-pair\") pod \"whisker-bfbbd99b4-flhbl\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " pod="calico-system/whisker-bfbbd99b4-flhbl" Mar 4 00:44:53.922750 kubelet[3174]: I0304 00:44:53.922728 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a41f2112-b709-42a8-b7ef-8a376aba0a93-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-mnq4v\" (UID: \"a41f2112-b709-42a8-b7ef-8a376aba0a93\") " pod="calico-system/goldmane-5b85766d88-mnq4v" Mar 4 00:44:53.923056 kubelet[3174]: I0304 00:44:53.922968 3174 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4bmf\" (UniqueName: \"kubernetes.io/projected/a41f2112-b709-42a8-b7ef-8a376aba0a93-kube-api-access-d4bmf\") pod \"goldmane-5b85766d88-mnq4v\" (UID: \"a41f2112-b709-42a8-b7ef-8a376aba0a93\") " pod="calico-system/goldmane-5b85766d88-mnq4v" Mar 4 00:44:53.923259 kubelet[3174]: I0304 00:44:53.923245 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx4dm\" (UniqueName: \"kubernetes.io/projected/0633db9d-7cfc-4660-aa53-674582de4b80-kube-api-access-rx4dm\") pod \"calico-apiserver-6c457fd865-j6sg7\" (UID: \"0633db9d-7cfc-4660-aa53-674582de4b80\") " pod="calico-system/calico-apiserver-6c457fd865-j6sg7" Mar 4 00:44:54.092202 containerd[1721]: time="2026-03-04T00:44:54.091902713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5s27l,Uid:a012406e-5d5d-4be6-a07e-5f6094f9e248,Namespace:kube-system,Attempt:0,}" Mar 4 00:44:54.115027 containerd[1721]: time="2026-03-04T00:44:54.114690970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54988d8f9b-q276v,Uid:1026cf72-bf90-4cde-98d0-93ad03abe950,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:54.127156 containerd[1721]: time="2026-03-04T00:44:54.127119499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjwpl,Uid:892fe9de-db99-487d-bc78-cf3eb17d4d4b,Namespace:kube-system,Attempt:0,}" Mar 4 00:44:54.136123 containerd[1721]: time="2026-03-04T00:44:54.136071585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bfbbd99b4-flhbl,Uid:71f59a49-90d2-4128-9e10-49da32cff0d5,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:54.147580 containerd[1721]: time="2026-03-04T00:44:54.147537913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-j6sg7,Uid:0633db9d-7cfc-4660-aa53-674582de4b80,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:54.154531 
containerd[1721]: time="2026-03-04T00:44:54.154493318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mnq4v,Uid:a41f2112-b709-42a8-b7ef-8a376aba0a93,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:54.156371 containerd[1721]: time="2026-03-04T00:44:54.156337600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-bbvcs,Uid:619b7aec-3809-4bd9-91c1-1f701fc98910,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:54.202050 containerd[1721]: time="2026-03-04T00:44:54.201962072Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 00:44:54.219593 containerd[1721]: time="2026-03-04T00:44:54.219546445Z" level=error msg="Failed to destroy network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.219914 containerd[1721]: time="2026-03-04T00:44:54.219887845Z" level=error msg="encountered an error cleaning up failed sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.219952 containerd[1721]: time="2026-03-04T00:44:54.219938445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5s27l,Uid:a012406e-5d5d-4be6-a07e-5f6094f9e248,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.220516 kubelet[3174]: E0304 00:44:54.220142 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.220516 kubelet[3174]: E0304 00:44:54.220202 3174 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5s27l" Mar 4 00:44:54.220516 kubelet[3174]: E0304 00:44:54.220220 3174 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5s27l" Mar 4 00:44:54.221015 kubelet[3174]: E0304 00:44:54.220276 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5s27l_kube-system(a012406e-5d5d-4be6-a07e-5f6094f9e248)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5s27l_kube-system(a012406e-5d5d-4be6-a07e-5f6094f9e248)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5s27l" podUID="a012406e-5d5d-4be6-a07e-5f6094f9e248" Mar 4 00:44:54.281098 containerd[1721]: time="2026-03-04T00:44:54.281049449Z" level=error msg="Failed to destroy network for sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.281381 containerd[1721]: time="2026-03-04T00:44:54.281354929Z" level=error msg="encountered an error cleaning up failed sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.281453 containerd[1721]: time="2026-03-04T00:44:54.281418889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54988d8f9b-q276v,Uid:1026cf72-bf90-4cde-98d0-93ad03abe950,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.281827 kubelet[3174]: E0304 00:44:54.281664 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.281827 kubelet[3174]: E0304 00:44:54.281751 3174 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54988d8f9b-q276v" Mar 4 00:44:54.281827 kubelet[3174]: E0304 00:44:54.281773 3174 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54988d8f9b-q276v" Mar 4 00:44:54.282098 kubelet[3174]: E0304 00:44:54.281974 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54988d8f9b-q276v_calico-system(1026cf72-bf90-4cde-98d0-93ad03abe950)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54988d8f9b-q276v_calico-system(1026cf72-bf90-4cde-98d0-93ad03abe950)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54988d8f9b-q276v" podUID="1026cf72-bf90-4cde-98d0-93ad03abe950" Mar 4 00:44:54.355488 containerd[1721]: time="2026-03-04T00:44:54.355222462Z" level=info msg="CreateContainer within sandbox \"2b4a621f34b4698d3034206d5ca2e86e3b01ff1e6b4b7425e9171b4e0c9c23a2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"acd1deeb77a6947a381dd1961795a0a8a6193c574acc1554bf5a075462b07471\"" Mar 4 00:44:54.358396 containerd[1721]: time="2026-03-04T00:44:54.357652504Z" level=info msg="StartContainer for \"acd1deeb77a6947a381dd1961795a0a8a6193c574acc1554bf5a075462b07471\"" Mar 4 00:44:54.364806 containerd[1721]: time="2026-03-04T00:44:54.364764869Z" level=error msg="Failed to destroy network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.365092 containerd[1721]: time="2026-03-04T00:44:54.365062310Z" level=error msg="encountered an error cleaning up failed sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.365156 containerd[1721]: time="2026-03-04T00:44:54.365127590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjwpl,Uid:892fe9de-db99-487d-bc78-cf3eb17d4d4b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 4 00:44:54.365492 kubelet[3174]: E0304 00:44:54.365309 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.365492 kubelet[3174]: E0304 00:44:54.365366 3174 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rjwpl" Mar 4 00:44:54.365492 kubelet[3174]: E0304 00:44:54.365385 3174 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rjwpl" Mar 4 00:44:54.365647 kubelet[3174]: E0304 00:44:54.365428 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rjwpl_kube-system(892fe9de-db99-487d-bc78-cf3eb17d4d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rjwpl_kube-system(892fe9de-db99-487d-bc78-cf3eb17d4d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rjwpl" podUID="892fe9de-db99-487d-bc78-cf3eb17d4d4b" Mar 4 00:44:54.424307 systemd[1]: Started cri-containerd-acd1deeb77a6947a381dd1961795a0a8a6193c574acc1554bf5a075462b07471.scope - libcontainer container acd1deeb77a6947a381dd1961795a0a8a6193c574acc1554bf5a075462b07471. Mar 4 00:44:54.437844 containerd[1721]: time="2026-03-04T00:44:54.437792162Z" level=error msg="Failed to destroy network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.439618 containerd[1721]: time="2026-03-04T00:44:54.439566083Z" level=error msg="encountered an error cleaning up failed sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.440441 containerd[1721]: time="2026-03-04T00:44:54.440301124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mnq4v,Uid:a41f2112-b709-42a8-b7ef-8a376aba0a93,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.441335 kubelet[3174]: E0304 00:44:54.441066 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.441335 kubelet[3174]: E0304 00:44:54.441158 3174 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-mnq4v" Mar 4 00:44:54.441335 kubelet[3174]: E0304 00:44:54.441197 3174 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-mnq4v" Mar 4 00:44:54.441574 kubelet[3174]: E0304 00:44:54.441261 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-mnq4v_calico-system(a41f2112-b709-42a8-b7ef-8a376aba0a93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-mnq4v_calico-system(a41f2112-b709-42a8-b7ef-8a376aba0a93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-5b85766d88-mnq4v" podUID="a41f2112-b709-42a8-b7ef-8a376aba0a93" Mar 4 00:44:54.465436 containerd[1721]: time="2026-03-04T00:44:54.465388382Z" level=error msg="Failed to destroy network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.465739 containerd[1721]: time="2026-03-04T00:44:54.465701102Z" level=error msg="encountered an error cleaning up failed sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.465777 containerd[1721]: time="2026-03-04T00:44:54.465749862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bfbbd99b4-flhbl,Uid:71f59a49-90d2-4128-9e10-49da32cff0d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.466544 kubelet[3174]: E0304 00:44:54.465944 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.466544 kubelet[3174]: E0304 00:44:54.466008 3174 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bfbbd99b4-flhbl" Mar 4 00:44:54.466544 kubelet[3174]: E0304 00:44:54.466028 3174 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bfbbd99b4-flhbl" Mar 4 00:44:54.466773 kubelet[3174]: E0304 00:44:54.466071 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bfbbd99b4-flhbl_calico-system(71f59a49-90d2-4128-9e10-49da32cff0d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bfbbd99b4-flhbl_calico-system(71f59a49-90d2-4128-9e10-49da32cff0d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bfbbd99b4-flhbl" podUID="71f59a49-90d2-4128-9e10-49da32cff0d5" Mar 4 00:44:54.477846 containerd[1721]: time="2026-03-04T00:44:54.477802231Z" level=error msg="Failed to destroy network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.480226 containerd[1721]: time="2026-03-04T00:44:54.480077272Z" level=error msg="encountered an error cleaning up failed sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.480226 containerd[1721]: time="2026-03-04T00:44:54.480187952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-bbvcs,Uid:619b7aec-3809-4bd9-91c1-1f701fc98910,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.482625 kubelet[3174]: E0304 00:44:54.480377 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.482625 kubelet[3174]: E0304 00:44:54.480440 3174 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-6c457fd865-bbvcs" Mar 4 00:44:54.482625 kubelet[3174]: E0304 00:44:54.480461 3174 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c457fd865-bbvcs" Mar 4 00:44:54.482808 kubelet[3174]: E0304 00:44:54.480505 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c457fd865-bbvcs_calico-system(619b7aec-3809-4bd9-91c1-1f701fc98910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c457fd865-bbvcs_calico-system(619b7aec-3809-4bd9-91c1-1f701fc98910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c457fd865-bbvcs" podUID="619b7aec-3809-4bd9-91c1-1f701fc98910" Mar 4 00:44:54.483153 containerd[1721]: time="2026-03-04T00:44:54.483040434Z" level=error msg="Failed to destroy network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.483655 containerd[1721]: time="2026-03-04T00:44:54.483458755Z" level=info msg="StartContainer for \"acd1deeb77a6947a381dd1961795a0a8a6193c574acc1554bf5a075462b07471\" returns successfully" Mar 4 
00:44:54.483655 containerd[1721]: time="2026-03-04T00:44:54.483524355Z" level=error msg="encountered an error cleaning up failed sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.483655 containerd[1721]: time="2026-03-04T00:44:54.483573315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-j6sg7,Uid:0633db9d-7cfc-4660-aa53-674582de4b80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.483816 kubelet[3174]: E0304 00:44:54.483781 3174 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 00:44:54.483870 kubelet[3174]: E0304 00:44:54.483837 3174 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c457fd865-j6sg7" Mar 4 00:44:54.483899 kubelet[3174]: E0304 00:44:54.483878 3174 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c457fd865-j6sg7" Mar 4 00:44:54.483946 kubelet[3174]: E0304 00:44:54.483923 3174 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c457fd865-j6sg7_calico-system(0633db9d-7cfc-4660-aa53-674582de4b80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c457fd865-j6sg7_calico-system(0633db9d-7cfc-4660-aa53-674582de4b80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c457fd865-j6sg7" podUID="0633db9d-7cfc-4660-aa53-674582de4b80" Mar 4 00:44:55.061710 systemd[1]: Created slice kubepods-besteffort-pod8b70ced7_e20a_43ff_ab70_0b136d70674a.slice - libcontainer container kubepods-besteffort-pod8b70ced7_e20a_43ff_ab70_0b136d70674a.slice. 
Mar 4 00:44:55.064239 containerd[1721]: time="2026-03-04T00:44:55.064203172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcgd,Uid:8b70ced7-e20a-43ff-ab70-0b136d70674a,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:55.190807 kubelet[3174]: I0304 00:44:55.190731 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:44:55.192122 containerd[1721]: time="2026-03-04T00:44:55.191863544Z" level=info msg="StopPodSandbox for \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\"" Mar 4 00:44:55.193127 kubelet[3174]: I0304 00:44:55.192506 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:44:55.193493 containerd[1721]: time="2026-03-04T00:44:55.192940624Z" level=info msg="StopPodSandbox for \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\"" Mar 4 00:44:55.194183 containerd[1721]: time="2026-03-04T00:44:55.193630225Z" level=info msg="Ensure that sandbox 2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f in task-service has been cleanup successfully" Mar 4 00:44:55.194640 containerd[1721]: time="2026-03-04T00:44:55.193928345Z" level=info msg="Ensure that sandbox 2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063 in task-service has been cleanup successfully" Mar 4 00:44:55.194780 kubelet[3174]: I0304 00:44:55.194675 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:44:55.196159 containerd[1721]: time="2026-03-04T00:44:55.196062187Z" level=info msg="StopPodSandbox for \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\"" Mar 4 00:44:55.196737 containerd[1721]: time="2026-03-04T00:44:55.196716307Z" level=info msg="Ensure that sandbox 
a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4 in task-service has been cleanup successfully" Mar 4 00:44:55.203997 kubelet[3174]: I0304 00:44:55.203762 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:44:55.204349 containerd[1721]: time="2026-03-04T00:44:55.204319393Z" level=info msg="StopPodSandbox for \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\"" Mar 4 00:44:55.205835 containerd[1721]: time="2026-03-04T00:44:55.205286993Z" level=info msg="Ensure that sandbox 71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee in task-service has been cleanup successfully" Mar 4 00:44:55.211690 kubelet[3174]: I0304 00:44:55.211278 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:44:55.212013 containerd[1721]: time="2026-03-04T00:44:55.211985678Z" level=info msg="StopPodSandbox for \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\"" Mar 4 00:44:55.212309 containerd[1721]: time="2026-03-04T00:44:55.212290238Z" level=info msg="Ensure that sandbox eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978 in task-service has been cleanup successfully" Mar 4 00:44:55.224240 kubelet[3174]: I0304 00:44:55.223618 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:44:55.225606 containerd[1721]: time="2026-03-04T00:44:55.225569608Z" level=info msg="StopPodSandbox for \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\"" Mar 4 00:44:55.225763 containerd[1721]: time="2026-03-04T00:44:55.225745488Z" level=info msg="Ensure that sandbox e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1 in task-service has been cleanup successfully" Mar 4 
00:44:55.227149 kubelet[3174]: I0304 00:44:55.226674 3174 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:44:55.227239 containerd[1721]: time="2026-03-04T00:44:55.227191689Z" level=info msg="StopPodSandbox for \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\"" Mar 4 00:44:55.227512 containerd[1721]: time="2026-03-04T00:44:55.227345049Z" level=info msg="Ensure that sandbox f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a in task-service has been cleanup successfully" Mar 4 00:44:55.232347 kubelet[3174]: I0304 00:44:55.231934 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9bcdf" podStartSLOduration=5.248354462 podStartE2EDuration="23.231919012s" podCreationTimestamp="2026-03-04 00:44:32 +0000 UTC" firstStartedPulling="2026-03-04 00:44:33.370686499 +0000 UTC m=+22.429859118" lastFinishedPulling="2026-03-04 00:44:51.354251049 +0000 UTC m=+40.413423668" observedRunningTime="2026-03-04 00:44:55.22888721 +0000 UTC m=+44.288059789" watchObservedRunningTime="2026-03-04 00:44:55.231919012 +0000 UTC m=+44.291091631" Mar 4 00:44:55.348178 systemd-networkd[1361]: cali861e2e2aef8: Link UP Mar 4 00:44:55.349494 systemd-networkd[1361]: cali861e2e2aef8: Gained carrier Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.119 [ERROR][4319] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.136 [INFO][4319] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0 csi-node-driver- calico-system 8b70ced7-e20a-43ff-ab70-0b136d70674a 704 0 2026-03-04 00:44:33 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-d3c3414975 csi-node-driver-lgcgd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali861e2e2aef8 [] [] }} ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.136 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.160 [INFO][4330] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" HandleID="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Workload="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.170 [INFO][4330] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" HandleID="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Workload="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381d00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d3c3414975", "pod":"csi-node-driver-lgcgd", "timestamp":"2026-03-04 
00:44:55.160988841 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186dc0)} Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.170 [INFO][4330] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.170 [INFO][4330] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.170 [INFO][4330] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.172 [INFO][4330] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.175 [INFO][4330] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.179 [INFO][4330] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.181 [INFO][4330] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.183 [INFO][4330] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.183 [INFO][4330] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" host="ci-4081.3.6-n-d3c3414975" Mar 4 
00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.185 [INFO][4330] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.200 [INFO][4330] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.214 [INFO][4330] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.193/26] block=192.168.87.192/26 handle="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.214 [INFO][4330] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.193/26] handle="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.214 [INFO][4330] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 00:44:55.427729 containerd[1721]: 2026-03-04 00:44:55.214 [INFO][4330] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.193/26] IPv6=[] ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" HandleID="k8s-pod-network.159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Workload="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" Mar 4 00:44:55.429126 containerd[1721]: 2026-03-04 00:44:55.241 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8b70ced7-e20a-43ff-ab70-0b136d70674a", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"csi-node-driver-lgcgd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali861e2e2aef8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:55.429126 containerd[1721]: 2026-03-04 00:44:55.243 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.193/32] ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" Mar 4 00:44:55.429126 containerd[1721]: 2026-03-04 00:44:55.244 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali861e2e2aef8 ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" Mar 4 00:44:55.429126 containerd[1721]: 2026-03-04 00:44:55.349 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" Mar 4 00:44:55.429126 containerd[1721]: 2026-03-04 00:44:55.354 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"8b70ced7-e20a-43ff-ab70-0b136d70674a", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a", Pod:"csi-node-driver-lgcgd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali861e2e2aef8", MAC:"b6:0a:18:69:d8:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:55.429126 containerd[1721]: 2026-03-04 00:44:55.417 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a" Namespace="calico-system" Pod="csi-node-driver-lgcgd" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-csi--node--driver--lgcgd-eth0" Mar 4 00:44:55.537618 containerd[1721]: time="2026-03-04T00:44:55.537207432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:55.537618 containerd[1721]: time="2026-03-04T00:44:55.537295632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:55.537618 containerd[1721]: time="2026-03-04T00:44:55.537311632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:55.537618 containerd[1721]: time="2026-03-04T00:44:55.537430352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.369 [INFO][4372] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.374 [INFO][4372] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" iface="eth0" netns="/var/run/netns/cni-2f0e0aed-5e5e-2ce2-72c3-8c978af50eab" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.378 [INFO][4372] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" iface="eth0" netns="/var/run/netns/cni-2f0e0aed-5e5e-2ce2-72c3-8c978af50eab" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.379 [INFO][4372] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" iface="eth0" netns="/var/run/netns/cni-2f0e0aed-5e5e-2ce2-72c3-8c978af50eab" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.379 [INFO][4372] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.379 [INFO][4372] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.529 [INFO][4443] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.529 [INFO][4443] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.529 [INFO][4443] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.566 [WARNING][4443] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.566 [INFO][4443] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.574 [INFO][4443] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:55.592715 containerd[1721]: 2026-03-04 00:44:55.588 [INFO][4372] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:44:55.595454 containerd[1721]: time="2026-03-04T00:44:55.595225313Z" level=info msg="TearDown network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\" successfully" Mar 4 00:44:55.595454 containerd[1721]: time="2026-03-04T00:44:55.595255513Z" level=info msg="StopPodSandbox for \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\" returns successfully" Mar 4 00:44:55.597707 systemd[1]: run-netns-cni\x2d2f0e0aed\x2d5e5e\x2d2ce2\x2d72c3\x2d8c978af50eab.mount: Deactivated successfully. 
Mar 4 00:44:55.599587 containerd[1721]: time="2026-03-04T00:44:55.599186396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-j6sg7,Uid:0633db9d-7cfc-4660-aa53-674582de4b80,Namespace:calico-system,Attempt:1,}" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.397 [INFO][4402] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.398 [INFO][4402] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" iface="eth0" netns="/var/run/netns/cni-76c574d2-ac53-5ec1-d360-bfb31eca305b" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.399 [INFO][4402] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" iface="eth0" netns="/var/run/netns/cni-76c574d2-ac53-5ec1-d360-bfb31eca305b" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.399 [INFO][4402] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" iface="eth0" netns="/var/run/netns/cni-76c574d2-ac53-5ec1-d360-bfb31eca305b" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.399 [INFO][4402] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.399 [INFO][4402] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.575 [INFO][4450] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.576 [INFO][4450] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.576 [INFO][4450] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.593 [WARNING][4450] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.593 [INFO][4450] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.599 [INFO][4450] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:55.615575 containerd[1721]: 2026-03-04 00:44:55.611 [INFO][4402] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:44:55.616787 containerd[1721]: time="2026-03-04T00:44:55.616476809Z" level=info msg="TearDown network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\" successfully" Mar 4 00:44:55.616787 containerd[1721]: time="2026-03-04T00:44:55.616509529Z" level=info msg="StopPodSandbox for \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\" returns successfully" Mar 4 00:44:55.628028 systemd[1]: Started cri-containerd-159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a.scope - libcontainer container 159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a. Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.427 [INFO][4378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.431 [INFO][4378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" iface="eth0" netns="/var/run/netns/cni-a6b10276-c0a9-d68a-ba9a-3350b4ca7e39" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.431 [INFO][4378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" iface="eth0" netns="/var/run/netns/cni-a6b10276-c0a9-d68a-ba9a-3350b4ca7e39" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.432 [INFO][4378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" iface="eth0" netns="/var/run/netns/cni-a6b10276-c0a9-d68a-ba9a-3350b4ca7e39" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.432 [INFO][4378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.432 [INFO][4378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.592 [INFO][4459] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.602 [INFO][4459] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.602 [INFO][4459] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.622 [WARNING][4459] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.626 [INFO][4459] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.631 [INFO][4459] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:55.645732 containerd[1721]: 2026-03-04 00:44:55.640 [INFO][4378] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:44:55.647046 containerd[1721]: time="2026-03-04T00:44:55.646890231Z" level=info msg="TearDown network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\" successfully" Mar 4 00:44:55.647046 containerd[1721]: time="2026-03-04T00:44:55.646920631Z" level=info msg="StopPodSandbox for \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\" returns successfully" Mar 4 00:44:55.647670 containerd[1721]: time="2026-03-04T00:44:55.647563351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-bbvcs,Uid:619b7aec-3809-4bd9-91c1-1f701fc98910,Namespace:calico-system,Attempt:1,}" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.473 [INFO][4409] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.473 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" iface="eth0" netns="/var/run/netns/cni-85bb2dab-e28e-52ac-8dd1-90428f800f66" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.473 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" iface="eth0" netns="/var/run/netns/cni-85bb2dab-e28e-52ac-8dd1-90428f800f66" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.479 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" iface="eth0" netns="/var/run/netns/cni-85bb2dab-e28e-52ac-8dd1-90428f800f66" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.479 [INFO][4409] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.479 [INFO][4409] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.627 [INFO][4472] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.631 [INFO][4472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.631 [INFO][4472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.648 [WARNING][4472] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.648 [INFO][4472] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.650 [INFO][4472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:55.657316 containerd[1721]: 2026-03-04 00:44:55.653 [INFO][4409] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:44:55.658554 containerd[1721]: time="2026-03-04T00:44:55.657327798Z" level=info msg="TearDown network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\" successfully" Mar 4 00:44:55.658554 containerd[1721]: time="2026-03-04T00:44:55.657353998Z" level=info msg="StopPodSandbox for \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\" returns successfully" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.499 [INFO][4424] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.500 [INFO][4424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" iface="eth0" netns="/var/run/netns/cni-c98c4de0-9069-f1fd-fc1a-01fbabbd6277" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.500 [INFO][4424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" iface="eth0" netns="/var/run/netns/cni-c98c4de0-9069-f1fd-fc1a-01fbabbd6277" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.509 [INFO][4424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" iface="eth0" netns="/var/run/netns/cni-c98c4de0-9069-f1fd-fc1a-01fbabbd6277" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.509 [INFO][4424] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.509 [INFO][4424] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.651 [INFO][4486] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.651 [INFO][4486] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.651 [INFO][4486] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.667 [WARNING][4486] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.669 [INFO][4486] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.671 [INFO][4486] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:55.677415 containerd[1721]: 2026-03-04 00:44:55.676 [INFO][4424] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:44:55.678994 containerd[1721]: time="2026-03-04T00:44:55.678081693Z" level=info msg="TearDown network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\" successfully" Mar 4 00:44:55.678994 containerd[1721]: time="2026-03-04T00:44:55.678624613Z" level=info msg="StopPodSandbox for \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\" returns successfully" Mar 4 00:44:55.682119 containerd[1721]: time="2026-03-04T00:44:55.681482135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5s27l,Uid:a012406e-5d5d-4be6-a07e-5f6094f9e248,Namespace:kube-system,Attempt:1,}" Mar 4 00:44:55.696121 containerd[1721]: time="2026-03-04T00:44:55.695799106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjwpl,Uid:892fe9de-db99-487d-bc78-cf3eb17d4d4b,Namespace:kube-system,Attempt:1,}" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.489 [INFO][4361] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.491 [INFO][4361] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" iface="eth0" netns="/var/run/netns/cni-9b4eba23-5484-ae6b-9424-7e158f2874f3" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.492 [INFO][4361] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" iface="eth0" netns="/var/run/netns/cni-9b4eba23-5484-ae6b-9424-7e158f2874f3" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.495 [INFO][4361] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" iface="eth0" netns="/var/run/netns/cni-9b4eba23-5484-ae6b-9424-7e158f2874f3" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.495 [INFO][4361] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.495 [INFO][4361] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.651 [INFO][4481] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.654 [INFO][4481] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.671 [INFO][4481] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.690 [WARNING][4481] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.690 [INFO][4481] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.693 [INFO][4481] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:55.702338 containerd[1721]: 2026-03-04 00:44:55.699 [INFO][4361] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:44:55.709144 containerd[1721]: time="2026-03-04T00:44:55.708230435Z" level=info msg="TearDown network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\" successfully" Mar 4 00:44:55.709144 containerd[1721]: time="2026-03-04T00:44:55.708277515Z" level=info msg="StopPodSandbox for \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\" returns successfully" Mar 4 00:44:55.713202 systemd[1]: run-netns-cni\x2da6b10276\x2dc0a9\x2dd68a\x2dba9a\x2d3350b4ca7e39.mount: Deactivated successfully. Mar 4 00:44:55.713449 systemd[1]: run-netns-cni\x2d76c574d2\x2dac53\x2d5ec1\x2dd360\x2dbfb31eca305b.mount: Deactivated successfully. 
Mar 4 00:44:55.713536 systemd[1]: run-netns-cni\x2d85bb2dab\x2de28e\x2d52ac\x2d8dd1\x2d90428f800f66.mount: Deactivated successfully. Mar 4 00:44:55.713746 systemd[1]: run-netns-cni\x2dc98c4de0\x2d9069\x2df1fd\x2dfc1a\x2d01fbabbd6277.mount: Deactivated successfully. Mar 4 00:44:55.716421 containerd[1721]: time="2026-03-04T00:44:55.714345599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mnq4v,Uid:a41f2112-b709-42a8-b7ef-8a376aba0a93,Namespace:calico-system,Attempt:1,}" Mar 4 00:44:55.719486 systemd[1]: run-netns-cni\x2d9b4eba23\x2d5484\x2dae6b\x2d9424\x2d7e158f2874f3.mount: Deactivated successfully. Mar 4 00:44:55.741269 kubelet[3174]: I0304 00:44:55.741230 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-backend-key-pair\") pod \"71f59a49-90d2-4128-9e10-49da32cff0d5\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " Mar 4 00:44:55.741436 kubelet[3174]: I0304 00:44:55.741287 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-nginx-config\") pod \"71f59a49-90d2-4128-9e10-49da32cff0d5\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " Mar 4 00:44:55.741436 kubelet[3174]: I0304 00:44:55.741318 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-ca-bundle\") pod \"71f59a49-90d2-4128-9e10-49da32cff0d5\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " Mar 4 00:44:55.741436 kubelet[3174]: I0304 00:44:55.741345 3174 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jf2t\" (UniqueName: \"kubernetes.io/projected/71f59a49-90d2-4128-9e10-49da32cff0d5-kube-api-access-2jf2t\") pod 
\"71f59a49-90d2-4128-9e10-49da32cff0d5\" (UID: \"71f59a49-90d2-4128-9e10-49da32cff0d5\") " Mar 4 00:44:55.741968 kubelet[3174]: I0304 00:44:55.741744 3174 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "71f59a49-90d2-4128-9e10-49da32cff0d5" (UID: "71f59a49-90d2-4128-9e10-49da32cff0d5"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 00:44:55.743362 kubelet[3174]: I0304 00:44:55.742294 3174 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-nginx-config\") on node \"ci-4081.3.6-n-d3c3414975\" DevicePath \"\"" Mar 4 00:44:55.743362 kubelet[3174]: I0304 00:44:55.742566 3174 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "71f59a49-90d2-4128-9e10-49da32cff0d5" (UID: "71f59a49-90d2-4128-9e10-49da32cff0d5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 00:44:55.748601 systemd[1]: var-lib-kubelet-pods-71f59a49\x2d90d2\x2d4128\x2d9e10\x2d49da32cff0d5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 00:44:55.751811 kubelet[3174]: I0304 00:44:55.751769 3174 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "71f59a49-90d2-4128-9e10-49da32cff0d5" (UID: "71f59a49-90d2-4128-9e10-49da32cff0d5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.522 [INFO][4415] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.522 [INFO][4415] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" iface="eth0" netns="/var/run/netns/cni-0f8dab1d-c162-d457-d375-aea8bf57afd8" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.524 [INFO][4415] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" iface="eth0" netns="/var/run/netns/cni-0f8dab1d-c162-d457-d375-aea8bf57afd8" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.530 [INFO][4415] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" iface="eth0" netns="/var/run/netns/cni-0f8dab1d-c162-d457-d375-aea8bf57afd8" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.530 [INFO][4415] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.530 [INFO][4415] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.663 [INFO][4498] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.665 [INFO][4498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.694 [INFO][4498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.726 [WARNING][4498] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.726 [INFO][4498] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.729 [INFO][4498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:55.752229 containerd[1721]: 2026-03-04 00:44:55.740 [INFO][4415] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:44:55.755266 containerd[1721]: time="2026-03-04T00:44:55.753420187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcgd,Uid:8b70ced7-e20a-43ff-ab70-0b136d70674a,Namespace:calico-system,Attempt:0,} returns sandbox id \"159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a\"" Mar 4 00:44:55.756101 systemd[1]: var-lib-kubelet-pods-71f59a49\x2d90d2\x2d4128\x2d9e10\x2d49da32cff0d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jf2t.mount: Deactivated successfully. Mar 4 00:44:55.757238 kubelet[3174]: I0304 00:44:55.756950 3174 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f59a49-90d2-4128-9e10-49da32cff0d5-kube-api-access-2jf2t" (OuterVolumeSpecName: "kube-api-access-2jf2t") pod "71f59a49-90d2-4128-9e10-49da32cff0d5" (UID: "71f59a49-90d2-4128-9e10-49da32cff0d5"). InnerVolumeSpecName "kube-api-access-2jf2t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 00:44:55.761277 systemd[1]: run-netns-cni\x2d0f8dab1d\x2dc162\x2dd457\x2dd375\x2daea8bf57afd8.mount: Deactivated successfully. Mar 4 00:44:55.771555 containerd[1721]: time="2026-03-04T00:44:55.771497200Z" level=info msg="TearDown network for sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\" successfully" Mar 4 00:44:55.771555 containerd[1721]: time="2026-03-04T00:44:55.771541800Z" level=info msg="StopPodSandbox for \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\" returns successfully" Mar 4 00:44:55.774831 containerd[1721]: time="2026-03-04T00:44:55.774602842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 00:44:55.775191 containerd[1721]: time="2026-03-04T00:44:55.774948723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54988d8f9b-q276v,Uid:1026cf72-bf90-4cde-98d0-93ad03abe950,Namespace:calico-system,Attempt:1,}" Mar 4 00:44:55.843446 kubelet[3174]: I0304 00:44:55.843403 3174 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-d3c3414975\" DevicePath \"\"" Mar 4 00:44:55.843446 kubelet[3174]: I0304 00:44:55.843439 3174 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71f59a49-90d2-4128-9e10-49da32cff0d5-whisker-ca-bundle\") on node \"ci-4081.3.6-n-d3c3414975\" DevicePath \"\"" Mar 4 00:44:55.843446 kubelet[3174]: I0304 00:44:55.843450 3174 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jf2t\" (UniqueName: \"kubernetes.io/projected/71f59a49-90d2-4128-9e10-49da32cff0d5-kube-api-access-2jf2t\") on node \"ci-4081.3.6-n-d3c3414975\" DevicePath \"\"" Mar 4 00:44:56.040870 systemd-networkd[1361]: cali707db5dd15b: Link UP Mar 4 00:44:56.041090 
systemd-networkd[1361]: cali707db5dd15b: Gained carrier Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.766 [ERROR][4531] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.790 [INFO][4531] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0 calico-apiserver-6c457fd865- calico-system 0633db9d-7cfc-4660-aa53-674582de4b80 896 0 2026-03-04 00:44:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c457fd865 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d3c3414975 calico-apiserver-6c457fd865-j6sg7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali707db5dd15b [] [] }} ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.790 [INFO][4531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.877 [INFO][4563] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" 
HandleID="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.911 [INFO][4563] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" HandleID="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004deb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d3c3414975", "pod":"calico-apiserver-6c457fd865-j6sg7", "timestamp":"2026-03-04 00:44:55.877677596 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400041d8c0)} Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.911 [INFO][4563] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.911 [INFO][4563] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.911 [INFO][4563] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.923 [INFO][4563] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.940 [INFO][4563] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.955 [INFO][4563] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.962 [INFO][4563] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.966 [INFO][4563] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.968 [INFO][4563] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.975 [INFO][4563] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2 Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:55.988 [INFO][4563] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:56.005 [INFO][4563] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.87.194/26] block=192.168.87.192/26 handle="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:56.005 [INFO][4563] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.194/26] handle="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:56.006 [INFO][4563] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:56.091039 containerd[1721]: 2026-03-04 00:44:56.007 [INFO][4563] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.194/26] IPv6=[] ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" HandleID="k8s-pod-network.9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:56.091919 containerd[1721]: 2026-03-04 00:44:56.021 [INFO][4531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"0633db9d-7cfc-4660-aa53-674582de4b80", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"calico-apiserver-6c457fd865-j6sg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali707db5dd15b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.091919 containerd[1721]: 2026-03-04 00:44:56.021 [INFO][4531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.194/32] ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:56.091919 containerd[1721]: 2026-03-04 00:44:56.021 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali707db5dd15b ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:56.091919 containerd[1721]: 2026-03-04 00:44:56.043 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" 
WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:56.091919 containerd[1721]: 2026-03-04 00:44:56.051 [INFO][4531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"0633db9d-7cfc-4660-aa53-674582de4b80", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2", Pod:"calico-apiserver-6c457fd865-j6sg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali707db5dd15b", MAC:"d6:3e:6c:32:92:e9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.091919 containerd[1721]: 2026-03-04 00:44:56.077 [INFO][4531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-j6sg7" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:44:56.187307 systemd-networkd[1361]: calie71069e8008: Link UP Mar 4 00:44:56.200304 systemd-networkd[1361]: calie71069e8008: Gained carrier Mar 4 00:44:56.229050 containerd[1721]: time="2026-03-04T00:44:56.228531848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:56.229050 containerd[1721]: time="2026-03-04T00:44:56.228595889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:56.229050 containerd[1721]: time="2026-03-04T00:44:56.228611889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.229050 containerd[1721]: time="2026-03-04T00:44:56.228691209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:55.790 [ERROR][4542] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:55.809 [INFO][4542] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0 calico-apiserver-6c457fd865- calico-system 619b7aec-3809-4bd9-91c1-1f701fc98910 899 0 2026-03-04 00:44:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c457fd865 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d3c3414975 calico-apiserver-6c457fd865-bbvcs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie71069e8008 [] [] }} ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:55.810 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.023 [INFO][4590] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" 
HandleID="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.063 [INFO][4590] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" HandleID="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003fecc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d3c3414975", "pod":"calico-apiserver-6c457fd865-bbvcs", "timestamp":"2026-03-04 00:44:56.023234141 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000406dc0)} Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.064 [INFO][4590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.064 [INFO][4590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.065 [INFO][4590] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.074 [INFO][4590] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.089 [INFO][4590] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.099 [INFO][4590] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.106 [INFO][4590] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.113 [INFO][4590] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.113 [INFO][4590] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.116 [INFO][4590] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9 Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.124 [INFO][4590] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.145 [INFO][4590] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.87.195/26] block=192.168.87.192/26 handle="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.145 [INFO][4590] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.195/26] handle="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.146 [INFO][4590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:56.254439 containerd[1721]: 2026-03-04 00:44:56.146 [INFO][4590] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.195/26] IPv6=[] ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" HandleID="k8s-pod-network.4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:56.256067 containerd[1721]: 2026-03-04 00:44:56.156 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"619b7aec-3809-4bd9-91c1-1f701fc98910", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"calico-apiserver-6c457fd865-bbvcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie71069e8008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.256067 containerd[1721]: 2026-03-04 00:44:56.156 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.195/32] ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:56.256067 containerd[1721]: 2026-03-04 00:44:56.156 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie71069e8008 ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:56.256067 containerd[1721]: 2026-03-04 00:44:56.204 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" 
WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:56.256067 containerd[1721]: 2026-03-04 00:44:56.215 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"619b7aec-3809-4bd9-91c1-1f701fc98910", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9", Pod:"calico-apiserver-6c457fd865-bbvcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie71069e8008", MAC:"c6:ea:16:28:60:97", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.256067 containerd[1721]: 2026-03-04 00:44:56.236 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9" Namespace="calico-system" Pod="calico-apiserver-6c457fd865-bbvcs" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:44:56.257233 systemd[1]: Removed slice kubepods-besteffort-pod71f59a49_90d2_4128_9e10_49da32cff0d5.slice - libcontainer container kubepods-besteffort-pod71f59a49_90d2_4128_9e10_49da32cff0d5.slice. Mar 4 00:44:56.335941 systemd[1]: Started cri-containerd-9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2.scope - libcontainer container 9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2. Mar 4 00:44:56.365817 containerd[1721]: time="2026-03-04T00:44:56.359976663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:56.365817 containerd[1721]: time="2026-03-04T00:44:56.360056743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:56.365817 containerd[1721]: time="2026-03-04T00:44:56.360113583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.365817 containerd[1721]: time="2026-03-04T00:44:56.360803903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.372365 systemd-networkd[1361]: calic098b2d6db9: Link UP Mar 4 00:44:56.372597 systemd-networkd[1361]: calic098b2d6db9: Gained carrier Mar 4 00:44:56.414364 systemd[1]: Started cri-containerd-4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9.scope - libcontainer container 4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9. Mar 4 00:44:56.434084 systemd[1]: Created slice kubepods-besteffort-pod66273ef3_9826_4866_8709_ff77393cc045.slice - libcontainer container kubepods-besteffort-pod66273ef3_9826_4866_8709_ff77393cc045.slice. Mar 4 00:44:56.447815 kubelet[3174]: I0304 00:44:56.447782 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66273ef3-9826-4866-8709-ff77393cc045-whisker-backend-key-pair\") pod \"whisker-58c5f4f849-n9str\" (UID: \"66273ef3-9826-4866-8709-ff77393cc045\") " pod="calico-system/whisker-58c5f4f849-n9str" Mar 4 00:44:56.448397 kubelet[3174]: I0304 00:44:56.448255 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66273ef3-9826-4866-8709-ff77393cc045-whisker-ca-bundle\") pod \"whisker-58c5f4f849-n9str\" (UID: \"66273ef3-9826-4866-8709-ff77393cc045\") " pod="calico-system/whisker-58c5f4f849-n9str" Mar 4 00:44:56.448397 kubelet[3174]: I0304 00:44:56.448300 3174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f456b\" (UniqueName: \"kubernetes.io/projected/66273ef3-9826-4866-8709-ff77393cc045-kube-api-access-f456b\") pod \"whisker-58c5f4f849-n9str\" (UID: \"66273ef3-9826-4866-8709-ff77393cc045\") " pod="calico-system/whisker-58c5f4f849-n9str" Mar 4 00:44:56.448397 kubelet[3174]: I0304 00:44:56.448362 3174 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/66273ef3-9826-4866-8709-ff77393cc045-nginx-config\") pod \"whisker-58c5f4f849-n9str\" (UID: \"66273ef3-9826-4866-8709-ff77393cc045\") " pod="calico-system/whisker-58c5f4f849-n9str" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:55.915 [ERROR][4567] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:55.963 [INFO][4567] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0 coredns-674b8bbfcf- kube-system a012406e-5d5d-4be6-a07e-5f6094f9e248 902 0 2026-03-04 00:44:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-d3c3414975 coredns-674b8bbfcf-5s27l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic098b2d6db9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:55.966 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.154 [INFO][4674] ipam/ipam_plugin.go 235: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" HandleID="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.216 [INFO][4674] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" HandleID="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d3c3414975", "pod":"coredns-674b8bbfcf-5s27l", "timestamp":"2026-03-04 00:44:56.154024235 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000353340)} Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.216 [INFO][4674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.217 [INFO][4674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.217 [INFO][4674] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.222 [INFO][4674] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.245 [INFO][4674] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.272 [INFO][4674] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.285 [INFO][4674] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.302 [INFO][4674] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.302 [INFO][4674] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.312 [INFO][4674] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3 Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.332 [INFO][4674] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.350 [INFO][4674] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.87.196/26] block=192.168.87.192/26 handle="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.350 [INFO][4674] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.196/26] handle="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.350 [INFO][4674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:56.454206 containerd[1721]: 2026-03-04 00:44:56.350 [INFO][4674] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.196/26] IPv6=[] ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" HandleID="k8s-pod-network.5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:56.454733 containerd[1721]: 2026-03-04 00:44:56.357 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a012406e-5d5d-4be6-a07e-5f6094f9e248", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"coredns-674b8bbfcf-5s27l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic098b2d6db9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.454733 containerd[1721]: 2026-03-04 00:44:56.357 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.196/32] ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:56.454733 containerd[1721]: 2026-03-04 00:44:56.357 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic098b2d6db9 ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:56.454733 containerd[1721]: 2026-03-04 00:44:56.374 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:56.454733 containerd[1721]: 2026-03-04 00:44:56.384 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a012406e-5d5d-4be6-a07e-5f6094f9e248", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3", Pod:"coredns-674b8bbfcf-5s27l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic098b2d6db9", MAC:"52:b7:76:30:d0:b9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.454733 containerd[1721]: 2026-03-04 00:44:56.451 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3" Namespace="kube-system" Pod="coredns-674b8bbfcf-5s27l" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:44:56.480246 systemd-networkd[1361]: cali861e2e2aef8: Gained IPv6LL Mar 4 00:44:56.487802 containerd[1721]: time="2026-03-04T00:44:56.487548414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:56.487995 containerd[1721]: time="2026-03-04T00:44:56.487774015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:56.487995 containerd[1721]: time="2026-03-04T00:44:56.487969575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.488294 containerd[1721]: time="2026-03-04T00:44:56.488204135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.528731 systemd[1]: Started cri-containerd-5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3.scope - libcontainer container 5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3. Mar 4 00:44:56.536537 containerd[1721]: time="2026-03-04T00:44:56.535783799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-bbvcs,Uid:619b7aec-3809-4bd9-91c1-1f701fc98910,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9\"" Mar 4 00:44:56.557629 systemd-networkd[1361]: cali3c3d2113b49: Link UP Mar 4 00:44:56.561250 systemd-networkd[1361]: cali3c3d2113b49: Gained carrier Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:55.969 [ERROR][4601] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.043 [INFO][4601] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0 goldmane-5b85766d88- calico-system a41f2112-b709-42a8-b7ef-8a376aba0a93 901 0 2026-03-04 00:44:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-d3c3414975 goldmane-5b85766d88-mnq4v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3c3d2113b49 [] [] }} ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-" Mar 4 00:44:56.599921 
containerd[1721]: 2026-03-04 00:44:56.044 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.266 [INFO][4698] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" HandleID="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.318 [INFO][4698] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" HandleID="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ec540), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d3c3414975", "pod":"goldmane-5b85766d88-mnq4v", "timestamp":"2026-03-04 00:44:56.266723756 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010ef20)} Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.318 [INFO][4698] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.351 [INFO][4698] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.351 [INFO][4698] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.365 [INFO][4698] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.433 [INFO][4698] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.470 [INFO][4698] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.480 [INFO][4698] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.495 [INFO][4698] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.496 [INFO][4698] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.502 [INFO][4698] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.511 [INFO][4698] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.535 [INFO][4698] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.87.197/26] block=192.168.87.192/26 handle="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.535 [INFO][4698] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.197/26] handle="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.535 [INFO][4698] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:56.599921 containerd[1721]: 2026-03-04 00:44:56.535 [INFO][4698] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.197/26] IPv6=[] ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" HandleID="k8s-pod-network.a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:56.600468 containerd[1721]: 2026-03-04 00:44:56.547 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a41f2112-b709-42a8-b7ef-8a376aba0a93", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"goldmane-5b85766d88-mnq4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c3d2113b49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.600468 containerd[1721]: 2026-03-04 00:44:56.548 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.197/32] ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:56.600468 containerd[1721]: 2026-03-04 00:44:56.548 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c3d2113b49 ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:56.600468 containerd[1721]: 2026-03-04 00:44:56.560 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:56.600468 containerd[1721]: 2026-03-04 00:44:56.564 [INFO][4601] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a41f2112-b709-42a8-b7ef-8a376aba0a93", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d", Pod:"goldmane-5b85766d88-mnq4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c3d2113b49", MAC:"0e:03:e5:db:6c:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.600468 containerd[1721]: 2026-03-04 00:44:56.594 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d" Namespace="calico-system" Pod="goldmane-5b85766d88-mnq4v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:44:56.622277 containerd[1721]: time="2026-03-04T00:44:56.621911114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5s27l,Uid:a012406e-5d5d-4be6-a07e-5f6094f9e248,Namespace:kube-system,Attempt:1,} returns sandbox id \"5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3\"" Mar 4 00:44:56.634769 containerd[1721]: time="2026-03-04T00:44:56.633805130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c457fd865-j6sg7,Uid:0633db9d-7cfc-4660-aa53-674582de4b80,Namespace:calico-system,Attempt:1,} returns sandbox id \"9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2\"" Mar 4 00:44:56.643914 systemd-networkd[1361]: cali345d6e7df87: Link UP Mar 4 00:44:56.644563 systemd-networkd[1361]: cali345d6e7df87: Gained carrier Mar 4 00:44:56.649633 containerd[1721]: time="2026-03-04T00:44:56.648880590Z" level=info msg="CreateContainer within sandbox \"5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.128 [ERROR][4662] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.156 [INFO][4662] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0 calico-kube-controllers-54988d8f9b- calico-system 1026cf72-bf90-4cde-98d0-93ad03abe950 904 0 2026-03-04 00:44:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:54988d8f9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-d3c3414975 calico-kube-controllers-54988d8f9b-q276v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali345d6e7df87 [] [] }} ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" Pod="calico-kube-controllers-54988d8f9b-q276v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.156 [INFO][4662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" Pod="calico-kube-controllers-54988d8f9b-q276v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.283 [INFO][4746] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" HandleID="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.326 [INFO][4746] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" HandleID="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002edd90), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081.3.6-n-d3c3414975", "pod":"calico-kube-controllers-54988d8f9b-q276v", "timestamp":"2026-03-04 00:44:56.283656208 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002294a0)} Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.326 [INFO][4746] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.536 [INFO][4746] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.536 [INFO][4746] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.549 [INFO][4746] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.579 [INFO][4746] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.584 [INFO][4746] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.592 [INFO][4746] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.603 [INFO][4746] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.603 [INFO][4746] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 
handle="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.607 [INFO][4746] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.621 [INFO][4746] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.637 [INFO][4746] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.198/26] block=192.168.87.192/26 handle="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.637 [INFO][4746] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.198/26] handle="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.637 [INFO][4746] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 00:44:56.672389 containerd[1721]: 2026-03-04 00:44:56.637 [INFO][4746] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.198/26] IPv6=[] ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" HandleID="k8s-pod-network.5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:56.675174 containerd[1721]: 2026-03-04 00:44:56.641 [INFO][4662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" Pod="calico-kube-controllers-54988d8f9b-q276v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0", GenerateName:"calico-kube-controllers-54988d8f9b-", Namespace:"calico-system", SelfLink:"", UID:"1026cf72-bf90-4cde-98d0-93ad03abe950", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54988d8f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"calico-kube-controllers-54988d8f9b-q276v", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali345d6e7df87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.675174 containerd[1721]: 2026-03-04 00:44:56.641 [INFO][4662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.198/32] ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" Pod="calico-kube-controllers-54988d8f9b-q276v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:56.675174 containerd[1721]: 2026-03-04 00:44:56.641 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali345d6e7df87 ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" Pod="calico-kube-controllers-54988d8f9b-q276v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:56.675174 containerd[1721]: 2026-03-04 00:44:56.644 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" Pod="calico-kube-controllers-54988d8f9b-q276v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:56.675174 containerd[1721]: 2026-03-04 00:44:56.645 [INFO][4662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" Pod="calico-kube-controllers-54988d8f9b-q276v" 
WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0", GenerateName:"calico-kube-controllers-54988d8f9b-", Namespace:"calico-system", SelfLink:"", UID:"1026cf72-bf90-4cde-98d0-93ad03abe950", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54988d8f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc", Pod:"calico-kube-controllers-54988d8f9b-q276v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali345d6e7df87", MAC:"d6:41:8a:9a:f4:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.675174 containerd[1721]: 2026-03-04 00:44:56.667 [INFO][4662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc" Namespace="calico-system" 
Pod="calico-kube-controllers-54988d8f9b-q276v" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:44:56.691438 containerd[1721]: time="2026-03-04T00:44:56.690958686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:56.691438 containerd[1721]: time="2026-03-04T00:44:56.691018686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:56.691438 containerd[1721]: time="2026-03-04T00:44:56.691034806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.691438 containerd[1721]: time="2026-03-04T00:44:56.691202046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.733570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375844420.mount: Deactivated successfully. Mar 4 00:44:56.740335 containerd[1721]: time="2026-03-04T00:44:56.738895470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c5f4f849-n9str,Uid:66273ef3-9826-4866-8709-ff77393cc045,Namespace:calico-system,Attempt:0,}" Mar 4 00:44:56.761998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745043129.mount: Deactivated successfully. Mar 4 00:44:56.790506 systemd[1]: Started cri-containerd-a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d.scope - libcontainer container a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d. Mar 4 00:44:56.794635 systemd-networkd[1361]: cali9e35d12aa82: Link UP Mar 4 00:44:56.794760 systemd-networkd[1361]: cali9e35d12aa82: Gained carrier Mar 4 00:44:56.798612 containerd[1721]: time="2026-03-04T00:44:56.798342389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:56.798612 containerd[1721]: time="2026-03-04T00:44:56.798393109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:56.798612 containerd[1721]: time="2026-03-04T00:44:56.798414229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.798612 containerd[1721]: time="2026-03-04T00:44:56.798495470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.825167 containerd[1721]: time="2026-03-04T00:44:56.824084064Z" level=info msg="CreateContainer within sandbox \"5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b195f5c99ecda68ad73b2b4a76efda65ae7963d04999821b8916d97403117d5\"" Mar 4 00:44:56.825876 containerd[1721]: time="2026-03-04T00:44:56.825833986Z" level=info msg="StartContainer for \"5b195f5c99ecda68ad73b2b4a76efda65ae7963d04999821b8916d97403117d5\"" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.132 [ERROR][4646] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.178 [INFO][4646] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0 coredns-674b8bbfcf- kube-system 892fe9de-db99-487d-bc78-cf3eb17d4d4b 900 0 2026-03-04 00:44:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ci-4081.3.6-n-d3c3414975 coredns-674b8bbfcf-rjwpl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e35d12aa82 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.181 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.324 [INFO][4752] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" HandleID="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.369 [INFO][4752] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" HandleID="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f7780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d3c3414975", "pod":"coredns-674b8bbfcf-rjwpl", "timestamp":"2026-03-04 00:44:56.324149597 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000318420)} Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.369 [INFO][4752] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.637 [INFO][4752] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.638 [INFO][4752] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.646 [INFO][4752] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.681 [INFO][4752] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.690 [INFO][4752] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.694 [INFO][4752] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.715 [INFO][4752] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.715 [INFO][4752] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.737 [INFO][4752] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 
00:44:56.754 [INFO][4752] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.773 [INFO][4752] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.199/26] block=192.168.87.192/26 handle="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.773 [INFO][4752] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.199/26] handle="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.773 [INFO][4752] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:44:56.827660 containerd[1721]: 2026-03-04 00:44:56.773 [INFO][4752] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.199/26] IPv6=[] ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" HandleID="k8s-pod-network.99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:56.830167 containerd[1721]: 2026-03-04 00:44:56.790 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"892fe9de-db99-487d-bc78-cf3eb17d4d4b", 
ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"coredns-674b8bbfcf-rjwpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e35d12aa82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.830167 containerd[1721]: 2026-03-04 00:44:56.790 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.199/32] ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:56.830167 containerd[1721]: 2026-03-04 00:44:56.790 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e35d12aa82 
ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:56.830167 containerd[1721]: 2026-03-04 00:44:56.795 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:56.830167 containerd[1721]: 2026-03-04 00:44:56.796 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"892fe9de-db99-487d-bc78-cf3eb17d4d4b", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e", 
Pod:"coredns-674b8bbfcf-rjwpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e35d12aa82", MAC:"c2:08:8c:21:81:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:56.830167 containerd[1721]: 2026-03-04 00:44:56.821 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e" Namespace="kube-system" Pod="coredns-674b8bbfcf-rjwpl" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:44:56.847515 systemd[1]: Started cri-containerd-5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc.scope - libcontainer container 5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc. Mar 4 00:44:56.921248 containerd[1721]: time="2026-03-04T00:44:56.920696593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:56.921248 containerd[1721]: time="2026-03-04T00:44:56.920757073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:56.921248 containerd[1721]: time="2026-03-04T00:44:56.920771673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.921248 containerd[1721]: time="2026-03-04T00:44:56.920852833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:56.933299 systemd[1]: Started cri-containerd-5b195f5c99ecda68ad73b2b4a76efda65ae7963d04999821b8916d97403117d5.scope - libcontainer container 5b195f5c99ecda68ad73b2b4a76efda65ae7963d04999821b8916d97403117d5. Mar 4 00:44:56.966270 systemd[1]: Started cri-containerd-99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e.scope - libcontainer container 99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e. Mar 4 00:44:57.073783 containerd[1721]: time="2026-03-04T00:44:57.073628637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mnq4v,Uid:a41f2112-b709-42a8-b7ef-8a376aba0a93,Namespace:calico-system,Attempt:1,} returns sandbox id \"a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d\"" Mar 4 00:44:57.074146 containerd[1721]: time="2026-03-04T00:44:57.074095198Z" level=info msg="StartContainer for \"5b195f5c99ecda68ad73b2b4a76efda65ae7963d04999821b8916d97403117d5\" returns successfully" Mar 4 00:44:57.074349 containerd[1721]: time="2026-03-04T00:44:57.074324998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54988d8f9b-q276v,Uid:1026cf72-bf90-4cde-98d0-93ad03abe950,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc\"" Mar 4 00:44:57.078284 kubelet[3174]: I0304 00:44:57.075932 3174 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f59a49-90d2-4128-9e10-49da32cff0d5" path="/var/lib/kubelet/pods/71f59a49-90d2-4128-9e10-49da32cff0d5/volumes" Mar 4 00:44:57.090385 containerd[1721]: time="2026-03-04T00:44:57.090350259Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-rjwpl,Uid:892fe9de-db99-487d-bc78-cf3eb17d4d4b,Namespace:kube-system,Attempt:1,} returns sandbox id \"99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e\"" Mar 4 00:44:57.103781 containerd[1721]: time="2026-03-04T00:44:57.103746917Z" level=info msg="CreateContainer within sandbox \"99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 00:44:57.176770 systemd-networkd[1361]: calieb78f8a60b7: Link UP Mar 4 00:44:57.180463 systemd-networkd[1361]: calieb78f8a60b7: Gained carrier Mar 4 00:44:57.182066 containerd[1721]: time="2026-03-04T00:44:57.181969382Z" level=info msg="CreateContainer within sandbox \"99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17ed416ff3afe1867385d331f436774484f8396f4f1571eeb39158781c3362f1\"" Mar 4 00:44:57.185166 containerd[1721]: time="2026-03-04T00:44:57.184340225Z" level=info msg="StartContainer for \"17ed416ff3afe1867385d331f436774484f8396f4f1571eeb39158781c3362f1\"" Mar 4 00:44:57.208054 kubelet[3174]: I0304 00:44:57.208018 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:44:57.217125 kernel: calico-node[4616]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:56.898 [ERROR][4950] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:56.943 [INFO][4950] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0 whisker-58c5f4f849- calico-system 66273ef3-9826-4866-8709-ff77393cc045 930 0 2026-03-04 00:44:56 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58c5f4f849 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-d3c3414975 whisker-58c5f4f849-n9str eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieb78f8a60b7 [] [] }} ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:56.943 [INFO][4950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.005 [INFO][5046] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" HandleID="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.038 [INFO][5046] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" HandleID="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d3c3414975", "pod":"whisker-58c5f4f849-n9str", "timestamp":"2026-03-04 00:44:57.005535626 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d3c3414975", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.071 [INFO][5046] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.071 [INFO][5046] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.072 [INFO][5046] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d3c3414975' Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.113 [INFO][5046] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.128 [INFO][5046] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.134 [INFO][5046] ipam/ipam.go 526: Trying affinity for 192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.138 [INFO][5046] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.141 [INFO][5046] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.141 [INFO][5046] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.144 
[INFO][5046] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375 Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.149 [INFO][5046] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.166 [INFO][5046] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.200/26] block=192.168.87.192/26 handle="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.167 [INFO][5046] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.200/26] handle="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" host="ci-4081.3.6-n-d3c3414975" Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.167 [INFO][5046] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 00:44:57.217656 containerd[1721]: 2026-03-04 00:44:57.167 [INFO][5046] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.200/26] IPv6=[] ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" HandleID="k8s-pod-network.32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" Mar 4 00:44:57.218852 containerd[1721]: 2026-03-04 00:44:57.172 [INFO][4950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0", GenerateName:"whisker-58c5f4f849-", Namespace:"calico-system", SelfLink:"", UID:"66273ef3-9826-4866-8709-ff77393cc045", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58c5f4f849", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"", Pod:"whisker-58c5f4f849-n9str", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calieb78f8a60b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:57.218852 containerd[1721]: 2026-03-04 00:44:57.173 [INFO][4950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.200/32] ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" Mar 4 00:44:57.218852 containerd[1721]: 2026-03-04 00:44:57.173 [INFO][4950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb78f8a60b7 ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" Mar 4 00:44:57.218852 containerd[1721]: 2026-03-04 00:44:57.183 [INFO][4950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" Mar 4 00:44:57.218852 containerd[1721]: 2026-03-04 00:44:57.189 [INFO][4950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0", GenerateName:"whisker-58c5f4f849-", Namespace:"calico-system", SelfLink:"", UID:"66273ef3-9826-4866-8709-ff77393cc045", 
ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58c5f4f849", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375", Pod:"whisker-58c5f4f849-n9str", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb78f8a60b7", MAC:"52:bb:8d:e5:e6:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:44:57.218852 containerd[1721]: 2026-03-04 00:44:57.210 [INFO][4950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375" Namespace="calico-system" Pod="whisker-58c5f4f849-n9str" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--58c5f4f849--n9str-eth0" Mar 4 00:44:57.232310 systemd[1]: Started cri-containerd-17ed416ff3afe1867385d331f436774484f8396f4f1571eeb39158781c3362f1.scope - libcontainer container 17ed416ff3afe1867385d331f436774484f8396f4f1571eeb39158781c3362f1. Mar 4 00:44:57.314203 containerd[1721]: time="2026-03-04T00:44:57.312887836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 00:44:57.314203 containerd[1721]: time="2026-03-04T00:44:57.312948436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 00:44:57.314203 containerd[1721]: time="2026-03-04T00:44:57.312973317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:57.314628 containerd[1721]: time="2026-03-04T00:44:57.313066917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 00:44:57.332680 containerd[1721]: time="2026-03-04T00:44:57.332474943Z" level=info msg="StartContainer for \"17ed416ff3afe1867385d331f436774484f8396f4f1571eeb39158781c3362f1\" returns successfully" Mar 4 00:44:57.387476 kubelet[3174]: I0304 00:44:57.387336 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5s27l" podStartSLOduration=39.387317056 podStartE2EDuration="39.387317056s" podCreationTimestamp="2026-03-04 00:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:44:57.386810735 +0000 UTC m=+46.445983354" watchObservedRunningTime="2026-03-04 00:44:57.387317056 +0000 UTC m=+46.446489635" Mar 4 00:44:57.412425 systemd[1]: Started cri-containerd-32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375.scope - libcontainer container 32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375. 
Mar 4 00:44:57.547503 containerd[1721]: time="2026-03-04T00:44:57.547375510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c5f4f849-n9str,Uid:66273ef3-9826-4866-8709-ff77393cc045,Namespace:calico-system,Attempt:0,} returns sandbox id \"32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375\"" Mar 4 00:44:57.760224 systemd-networkd[1361]: cali707db5dd15b: Gained IPv6LL Mar 4 00:44:57.888238 systemd-networkd[1361]: cali3c3d2113b49: Gained IPv6LL Mar 4 00:44:57.888948 systemd-networkd[1361]: calic098b2d6db9: Gained IPv6LL Mar 4 00:44:57.952482 systemd-networkd[1361]: calie71069e8008: Gained IPv6LL Mar 4 00:44:57.985354 systemd-networkd[1361]: vxlan.calico: Link UP Mar 4 00:44:57.985362 systemd-networkd[1361]: vxlan.calico: Gained carrier Mar 4 00:44:58.398705 kubelet[3174]: I0304 00:44:58.398378 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rjwpl" podStartSLOduration=40.398357966 podStartE2EDuration="40.398357966s" podCreationTimestamp="2026-03-04 00:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 00:44:58.397165364 +0000 UTC m=+47.456337983" watchObservedRunningTime="2026-03-04 00:44:58.398357966 +0000 UTC m=+47.457530545" Mar 4 00:44:58.494797 containerd[1721]: time="2026-03-04T00:44:58.494263054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:58.497751 containerd[1721]: time="2026-03-04T00:44:58.497716059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 4 00:44:58.500532 containerd[1721]: time="2026-03-04T00:44:58.500498542Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Mar 4 00:44:58.505911 containerd[1721]: time="2026-03-04T00:44:58.505257149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:44:58.506532 containerd[1721]: time="2026-03-04T00:44:58.506482630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.731841708s" Mar 4 00:44:58.506684 containerd[1721]: time="2026-03-04T00:44:58.506666550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 4 00:44:58.509235 containerd[1721]: time="2026-03-04T00:44:58.508308473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 00:44:58.516923 containerd[1721]: time="2026-03-04T00:44:58.516654884Z" level=info msg="CreateContainer within sandbox \"159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 00:44:58.564001 containerd[1721]: time="2026-03-04T00:44:58.563957987Z" level=info msg="CreateContainer within sandbox \"159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6ce5af1a54824ab291e7a53825a24d5ce81e4a893efc8d988958ca88e82fef4e\"" Mar 4 00:44:58.565134 containerd[1721]: time="2026-03-04T00:44:58.564501428Z" level=info msg="StartContainer for \"6ce5af1a54824ab291e7a53825a24d5ce81e4a893efc8d988958ca88e82fef4e\"" Mar 4 00:44:58.592790 systemd-networkd[1361]: cali345d6e7df87: Gained IPv6LL Mar 4 00:44:58.599271 systemd[1]: 
Started cri-containerd-6ce5af1a54824ab291e7a53825a24d5ce81e4a893efc8d988958ca88e82fef4e.scope - libcontainer container 6ce5af1a54824ab291e7a53825a24d5ce81e4a893efc8d988958ca88e82fef4e. Mar 4 00:44:58.631198 containerd[1721]: time="2026-03-04T00:44:58.631156877Z" level=info msg="StartContainer for \"6ce5af1a54824ab291e7a53825a24d5ce81e4a893efc8d988958ca88e82fef4e\" returns successfully" Mar 4 00:44:58.656218 systemd-networkd[1361]: calieb78f8a60b7: Gained IPv6LL Mar 4 00:44:58.708759 systemd[1]: run-containerd-runc-k8s.io-6ce5af1a54824ab291e7a53825a24d5ce81e4a893efc8d988958ca88e82fef4e-runc.djacif.mount: Deactivated successfully. Mar 4 00:44:58.848261 systemd-networkd[1361]: cali9e35d12aa82: Gained IPv6LL Mar 4 00:44:59.552216 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Mar 4 00:45:01.039099 containerd[1721]: time="2026-03-04T00:45:01.039046052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:01.041993 containerd[1721]: time="2026-03-04T00:45:01.041760936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 4 00:45:01.045139 containerd[1721]: time="2026-03-04T00:45:01.045092700Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:01.050146 containerd[1721]: time="2026-03-04T00:45:01.050079227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:01.051144 containerd[1721]: time="2026-03-04T00:45:01.051012988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.542669235s" Mar 4 00:45:01.051144 containerd[1721]: time="2026-03-04T00:45:01.051047468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 4 00:45:01.054854 containerd[1721]: time="2026-03-04T00:45:01.053770952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 00:45:01.059814 containerd[1721]: time="2026-03-04T00:45:01.059778320Z" level=info msg="CreateContainer within sandbox \"4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 00:45:01.086835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845464628.mount: Deactivated successfully. Mar 4 00:45:01.096816 containerd[1721]: time="2026-03-04T00:45:01.096085888Z" level=info msg="CreateContainer within sandbox \"4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c198b238cacf6ba9f360b92535106764387f30cf9ab5e7e86ab7a04240729c5\"" Mar 4 00:45:01.097591 containerd[1721]: time="2026-03-04T00:45:01.097500250Z" level=info msg="StartContainer for \"9c198b238cacf6ba9f360b92535106764387f30cf9ab5e7e86ab7a04240729c5\"" Mar 4 00:45:01.136497 systemd[1]: Started cri-containerd-9c198b238cacf6ba9f360b92535106764387f30cf9ab5e7e86ab7a04240729c5.scope - libcontainer container 9c198b238cacf6ba9f360b92535106764387f30cf9ab5e7e86ab7a04240729c5. 
Mar 4 00:45:01.434978 containerd[1721]: time="2026-03-04T00:45:01.434939901Z" level=info msg="StartContainer for \"9c198b238cacf6ba9f360b92535106764387f30cf9ab5e7e86ab7a04240729c5\" returns successfully" Mar 4 00:45:01.473745 containerd[1721]: time="2026-03-04T00:45:01.473698592Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:01.476927 containerd[1721]: time="2026-03-04T00:45:01.476891477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 00:45:01.478676 containerd[1721]: time="2026-03-04T00:45:01.478639279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 424.833167ms" Mar 4 00:45:01.478721 containerd[1721]: time="2026-03-04T00:45:01.478682399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 4 00:45:01.480312 containerd[1721]: time="2026-03-04T00:45:01.480275881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 00:45:01.491588 containerd[1721]: time="2026-03-04T00:45:01.491395896Z" level=info msg="CreateContainer within sandbox \"9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 00:45:01.530076 containerd[1721]: time="2026-03-04T00:45:01.530030668Z" level=info msg="CreateContainer within sandbox \"9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"279f4fe03f89fdf89e65dc161ce97b838a9207d4593a2f42341553b83dba1f1f\"" Mar 4 00:45:01.531724 containerd[1721]: time="2026-03-04T00:45:01.530708029Z" level=info msg="StartContainer for \"279f4fe03f89fdf89e65dc161ce97b838a9207d4593a2f42341553b83dba1f1f\"" Mar 4 00:45:01.611360 systemd[1]: Started cri-containerd-279f4fe03f89fdf89e65dc161ce97b838a9207d4593a2f42341553b83dba1f1f.scope - libcontainer container 279f4fe03f89fdf89e65dc161ce97b838a9207d4593a2f42341553b83dba1f1f. Mar 4 00:45:01.689837 containerd[1721]: time="2026-03-04T00:45:01.689735481Z" level=info msg="StartContainer for \"279f4fe03f89fdf89e65dc161ce97b838a9207d4593a2f42341553b83dba1f1f\" returns successfully" Mar 4 00:45:02.482863 kubelet[3174]: I0304 00:45:02.481801 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c457fd865-j6sg7" podStartSLOduration=27.653682652 podStartE2EDuration="32.481782579s" podCreationTimestamp="2026-03-04 00:44:30 +0000 UTC" firstStartedPulling="2026-03-04 00:44:56.651286353 +0000 UTC m=+45.710458972" lastFinishedPulling="2026-03-04 00:45:01.47938628 +0000 UTC m=+50.538558899" observedRunningTime="2026-03-04 00:45:02.462167912 +0000 UTC m=+51.521340531" watchObservedRunningTime="2026-03-04 00:45:02.481782579 +0000 UTC m=+51.540955198" Mar 4 00:45:03.447226 kubelet[3174]: I0304 00:45:03.447151 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:45:03.474240 kubelet[3174]: I0304 00:45:03.474168 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c457fd865-bbvcs" podStartSLOduration=28.962840999 podStartE2EDuration="33.474150984s" podCreationTimestamp="2026-03-04 00:44:30 +0000 UTC" firstStartedPulling="2026-03-04 00:44:56.540827365 +0000 UTC m=+45.599999984" lastFinishedPulling="2026-03-04 00:45:01.05213735 +0000 UTC m=+50.111309969" observedRunningTime="2026-03-04 00:45:02.48252318 +0000 UTC m=+51.541695799" 
watchObservedRunningTime="2026-03-04 00:45:03.474150984 +0000 UTC m=+52.533323563" Mar 4 00:45:05.559041 containerd[1721]: time="2026-03-04T00:45:05.558309722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:05.564040 containerd[1721]: time="2026-03-04T00:45:05.564006153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 4 00:45:05.568969 containerd[1721]: time="2026-03-04T00:45:05.568782065Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:05.573849 containerd[1721]: time="2026-03-04T00:45:05.573819137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:05.574726 containerd[1721]: time="2026-03-04T00:45:05.574611815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.094296454s" Mar 4 00:45:05.574726 containerd[1721]: time="2026-03-04T00:45:05.574646375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 4 00:45:05.576307 containerd[1721]: time="2026-03-04T00:45:05.576273013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 00:45:05.595211 containerd[1721]: 
time="2026-03-04T00:45:05.595081541Z" level=info msg="CreateContainer within sandbox \"5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 00:45:05.631645 containerd[1721]: time="2026-03-04T00:45:05.631500281Z" level=info msg="CreateContainer within sandbox \"5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"121919c2b2f931367da3cb82718d47072eac2486f4f869d13812b817351191ff\"" Mar 4 00:45:05.632977 containerd[1721]: time="2026-03-04T00:45:05.632946878Z" level=info msg="StartContainer for \"121919c2b2f931367da3cb82718d47072eac2486f4f869d13812b817351191ff\"" Mar 4 00:45:05.661339 systemd[1]: Started cri-containerd-121919c2b2f931367da3cb82718d47072eac2486f4f869d13812b817351191ff.scope - libcontainer container 121919c2b2f931367da3cb82718d47072eac2486f4f869d13812b817351191ff. Mar 4 00:45:05.697205 containerd[1721]: time="2026-03-04T00:45:05.697068731Z" level=info msg="StartContainer for \"121919c2b2f931367da3cb82718d47072eac2486f4f869d13812b817351191ff\" returns successfully" Mar 4 00:45:06.477572 kubelet[3174]: I0304 00:45:06.477143 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54988d8f9b-q276v" podStartSLOduration=24.980821945 podStartE2EDuration="33.476535473s" podCreationTimestamp="2026-03-04 00:44:33 +0000 UTC" firstStartedPulling="2026-03-04 00:44:57.080031565 +0000 UTC m=+46.139204184" lastFinishedPulling="2026-03-04 00:45:05.575745093 +0000 UTC m=+54.634917712" observedRunningTime="2026-03-04 00:45:06.475904634 +0000 UTC m=+55.535077373" watchObservedRunningTime="2026-03-04 00:45:06.476535473 +0000 UTC m=+55.535708172" Mar 4 00:45:07.948688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009590264.mount: Deactivated successfully. 
Mar 4 00:45:08.301203 containerd[1721]: time="2026-03-04T00:45:08.300852567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:08.303248 containerd[1721]: time="2026-03-04T00:45:08.303206009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 4 00:45:08.306203 containerd[1721]: time="2026-03-04T00:45:08.306147971Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:08.311067 containerd[1721]: time="2026-03-04T00:45:08.310747654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:08.311631 containerd[1721]: time="2026-03-04T00:45:08.311599775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.735284642s" Mar 4 00:45:08.311631 containerd[1721]: time="2026-03-04T00:45:08.311631255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 4 00:45:08.313563 containerd[1721]: time="2026-03-04T00:45:08.313143256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 00:45:08.319877 containerd[1721]: time="2026-03-04T00:45:08.319839380Z" level=info msg="CreateContainer within sandbox \"a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 00:45:08.354319 containerd[1721]: time="2026-03-04T00:45:08.354277324Z" level=info msg="CreateContainer within sandbox \"a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1f64b931740fcb01946bd4e01e72ff9d1ee647c01874c775ad4c28102e263445\"" Mar 4 00:45:08.355699 containerd[1721]: time="2026-03-04T00:45:08.355666165Z" level=info msg="StartContainer for \"1f64b931740fcb01946bd4e01e72ff9d1ee647c01874c775ad4c28102e263445\"" Mar 4 00:45:08.406372 systemd[1]: Started cri-containerd-1f64b931740fcb01946bd4e01e72ff9d1ee647c01874c775ad4c28102e263445.scope - libcontainer container 1f64b931740fcb01946bd4e01e72ff9d1ee647c01874c775ad4c28102e263445. Mar 4 00:45:08.442519 containerd[1721]: time="2026-03-04T00:45:08.442472306Z" level=info msg="StartContainer for \"1f64b931740fcb01946bd4e01e72ff9d1ee647c01874c775ad4c28102e263445\" returns successfully" Mar 4 00:45:10.717442 containerd[1721]: time="2026-03-04T00:45:10.717392330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:10.720030 containerd[1721]: time="2026-03-04T00:45:10.719861052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 4 00:45:10.724282 containerd[1721]: time="2026-03-04T00:45:10.723933135Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:10.728553 containerd[1721]: time="2026-03-04T00:45:10.728512618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:10.729475 containerd[1721]: 
time="2026-03-04T00:45:10.729439419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 2.416267923s" Mar 4 00:45:10.729475 containerd[1721]: time="2026-03-04T00:45:10.729475699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 4 00:45:10.731433 containerd[1721]: time="2026-03-04T00:45:10.731392340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 00:45:10.738251 containerd[1721]: time="2026-03-04T00:45:10.738211825Z" level=info msg="CreateContainer within sandbox \"32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 00:45:10.779795 containerd[1721]: time="2026-03-04T00:45:10.779680254Z" level=info msg="CreateContainer within sandbox \"32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b8c41bd9b4a970576a3d2a269f24ff891a037f9dc71a157df0c3ecef568f0261\"" Mar 4 00:45:10.780645 containerd[1721]: time="2026-03-04T00:45:10.780414254Z" level=info msg="StartContainer for \"b8c41bd9b4a970576a3d2a269f24ff891a037f9dc71a157df0c3ecef568f0261\"" Mar 4 00:45:10.815328 systemd[1]: Started cri-containerd-b8c41bd9b4a970576a3d2a269f24ff891a037f9dc71a157df0c3ecef568f0261.scope - libcontainer container b8c41bd9b4a970576a3d2a269f24ff891a037f9dc71a157df0c3ecef568f0261. 
Mar 4 00:45:10.851500 containerd[1721]: time="2026-03-04T00:45:10.851459264Z" level=info msg="StartContainer for \"b8c41bd9b4a970576a3d2a269f24ff891a037f9dc71a157df0c3ecef568f0261\" returns successfully" Mar 4 00:45:11.080155 containerd[1721]: time="2026-03-04T00:45:11.079592062Z" level=info msg="StopPodSandbox for \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\"" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.131 [WARNING][5741] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a012406e-5d5d-4be6-a07e-5f6094f9e248", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3", Pod:"coredns-674b8bbfcf-5s27l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic098b2d6db9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.131 [INFO][5741] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.131 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" iface="eth0" netns="" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.131 [INFO][5741] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.131 [INFO][5741] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.157 [INFO][5748] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.157 [INFO][5748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.157 [INFO][5748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.167 [WARNING][5748] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.168 [INFO][5748] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.170 [INFO][5748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.174359 containerd[1721]: 2026-03-04 00:45:11.172 [INFO][5741] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.174359 containerd[1721]: time="2026-03-04T00:45:11.174230488Z" level=info msg="TearDown network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\" successfully" Mar 4 00:45:11.174359 containerd[1721]: time="2026-03-04T00:45:11.174256648Z" level=info msg="StopPodSandbox for \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\" returns successfully" Mar 4 00:45:11.176153 containerd[1721]: time="2026-03-04T00:45:11.175771529Z" level=info msg="RemovePodSandbox for \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\"" Mar 4 00:45:11.178238 containerd[1721]: time="2026-03-04T00:45:11.178202611Z" level=info msg="Forcibly stopping sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\"" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.215 [WARNING][5763] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a012406e-5d5d-4be6-a07e-5f6094f9e248", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"5537cbe9cf813f797e1da2d91bc212148f8f985182bb7b6b06374448d5626ec3", Pod:"coredns-674b8bbfcf-5s27l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic098b2d6db9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 
00:45:11.215 [INFO][5763] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.215 [INFO][5763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" iface="eth0" netns="" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.215 [INFO][5763] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.215 [INFO][5763] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.236 [INFO][5770] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.237 [INFO][5770] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.237 [INFO][5770] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.245 [WARNING][5770] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.245 [INFO][5770] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" HandleID="k8s-pod-network.f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--5s27l-eth0" Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.247 [INFO][5770] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.250836 containerd[1721]: 2026-03-04 00:45:11.249 [INFO][5763] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a" Mar 4 00:45:11.251799 containerd[1721]: time="2026-03-04T00:45:11.250974262Z" level=info msg="TearDown network for sandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\" successfully" Mar 4 00:45:11.267612 containerd[1721]: time="2026-03-04T00:45:11.267256193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:45:11.267612 containerd[1721]: time="2026-03-04T00:45:11.267457593Z" level=info msg="RemovePodSandbox \"f9ad75ca05854d96886e63e06068f37f7833bb154ec7a5ec93acb03d7483965a\" returns successfully" Mar 4 00:45:11.268027 containerd[1721]: time="2026-03-04T00:45:11.267998314Z" level=info msg="StopPodSandbox for \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\"" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.302 [WARNING][5784] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"0633db9d-7cfc-4660-aa53-674582de4b80", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2", Pod:"calico-apiserver-6c457fd865-j6sg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali707db5dd15b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.302 [INFO][5784] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.302 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" iface="eth0" netns="" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.302 [INFO][5784] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.302 [INFO][5784] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.322 [INFO][5791] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.322 [INFO][5791] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.322 [INFO][5791] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.331 [WARNING][5791] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.331 [INFO][5791] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.333 [INFO][5791] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.336741 containerd[1721]: 2026-03-04 00:45:11.335 [INFO][5784] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.337340 containerd[1721]: time="2026-03-04T00:45:11.337204442Z" level=info msg="TearDown network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\" successfully" Mar 4 00:45:11.337340 containerd[1721]: time="2026-03-04T00:45:11.337239082Z" level=info msg="StopPodSandbox for \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\" returns successfully" Mar 4 00:45:11.338534 containerd[1721]: time="2026-03-04T00:45:11.338297043Z" level=info msg="RemovePodSandbox for \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\"" Mar 4 00:45:11.338534 containerd[1721]: time="2026-03-04T00:45:11.338327363Z" level=info msg="Forcibly stopping sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\"" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.401 [WARNING][5805] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"0633db9d-7cfc-4660-aa53-674582de4b80", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"9cc7192bf778565487b3d47e9564c8935926aa08b732799cb437866a054168d2", Pod:"calico-apiserver-6c457fd865-j6sg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali707db5dd15b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.401 [INFO][5805] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.401 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" iface="eth0" netns="" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.401 [INFO][5805] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.401 [INFO][5805] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.422 [INFO][5815] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.422 [INFO][5815] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.422 [INFO][5815] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.431 [WARNING][5815] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.431 [INFO][5815] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" HandleID="k8s-pod-network.a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--j6sg7-eth0" Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.434 [INFO][5815] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.437714 containerd[1721]: 2026-03-04 00:45:11.436 [INFO][5805] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4" Mar 4 00:45:11.438231 containerd[1721]: time="2026-03-04T00:45:11.437759272Z" level=info msg="TearDown network for sandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\" successfully" Mar 4 00:45:11.446143 containerd[1721]: time="2026-03-04T00:45:11.446086958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:45:11.446228 containerd[1721]: time="2026-03-04T00:45:11.446181118Z" level=info msg="RemovePodSandbox \"a4b6e1c3295a5a30e174e9679399e362cb7adf15a8462ec0db90b3a6d581e7a4\" returns successfully" Mar 4 00:45:11.446857 containerd[1721]: time="2026-03-04T00:45:11.446622998Z" level=info msg="StopPodSandbox for \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\"" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.478 [WARNING][5830] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a41f2112-b709-42a8-b7ef-8a376aba0a93", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d", Pod:"goldmane-5b85766d88-mnq4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali3c3d2113b49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.479 [INFO][5830] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.479 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" iface="eth0" netns="" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.479 [INFO][5830] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.479 [INFO][5830] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.498 [INFO][5838] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.498 [INFO][5838] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.498 [INFO][5838] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.507 [WARNING][5838] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.507 [INFO][5838] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.508 [INFO][5838] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.514002 containerd[1721]: 2026-03-04 00:45:11.511 [INFO][5830] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.514570 containerd[1721]: time="2026-03-04T00:45:11.514070485Z" level=info msg="TearDown network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\" successfully" Mar 4 00:45:11.514570 containerd[1721]: time="2026-03-04T00:45:11.514096685Z" level=info msg="StopPodSandbox for \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\" returns successfully" Mar 4 00:45:11.515310 containerd[1721]: time="2026-03-04T00:45:11.514946846Z" level=info msg="RemovePodSandbox for \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\"" Mar 4 00:45:11.515310 containerd[1721]: time="2026-03-04T00:45:11.514977246Z" level=info msg="Forcibly stopping sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\"" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.553 [WARNING][5853] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a41f2112-b709-42a8-b7ef-8a376aba0a93", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"a98d4c1e50861e190724f61dd9f55df0ec9c00c983e04f4ce54af72f29428a9d", Pod:"goldmane-5b85766d88-mnq4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3c3d2113b49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.554 [INFO][5853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.554 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" iface="eth0" netns="" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.554 [INFO][5853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.554 [INFO][5853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.574 [INFO][5860] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.574 [INFO][5860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.575 [INFO][5860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.584 [WARNING][5860] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.584 [INFO][5860] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" HandleID="k8s-pod-network.2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Workload="ci--4081.3.6--n--d3c3414975-k8s-goldmane--5b85766d88--mnq4v-eth0" Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.585 [INFO][5860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.589145 containerd[1721]: 2026-03-04 00:45:11.587 [INFO][5853] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063" Mar 4 00:45:11.589145 containerd[1721]: time="2026-03-04T00:45:11.588826137Z" level=info msg="TearDown network for sandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\" successfully" Mar 4 00:45:11.596908 containerd[1721]: time="2026-03-04T00:45:11.596858903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:45:11.597132 containerd[1721]: time="2026-03-04T00:45:11.596933023Z" level=info msg="RemovePodSandbox \"2083772682e613b2878b2772dd1cdd109b9ce888ead3f5fc94d7858229013063\" returns successfully" Mar 4 00:45:11.597752 containerd[1721]: time="2026-03-04T00:45:11.597486503Z" level=info msg="StopPodSandbox for \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\"" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.634 [WARNING][5874] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"892fe9de-db99-487d-bc78-cf3eb17d4d4b", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e", Pod:"coredns-674b8bbfcf-rjwpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e35d12aa82", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.634 [INFO][5874] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.634 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" iface="eth0" netns="" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.634 [INFO][5874] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.634 [INFO][5874] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.655 [INFO][5882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.655 [INFO][5882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.655 [INFO][5882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.666 [WARNING][5882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.666 [INFO][5882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.668 [INFO][5882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.671785 containerd[1721]: 2026-03-04 00:45:11.670 [INFO][5874] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.672382 containerd[1721]: time="2026-03-04T00:45:11.671970155Z" level=info msg="TearDown network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\" successfully" Mar 4 00:45:11.672382 containerd[1721]: time="2026-03-04T00:45:11.671999515Z" level=info msg="StopPodSandbox for \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\" returns successfully" Mar 4 00:45:11.672677 containerd[1721]: time="2026-03-04T00:45:11.672650515Z" level=info msg="RemovePodSandbox for \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\"" Mar 4 00:45:11.672728 containerd[1721]: time="2026-03-04T00:45:11.672683675Z" level=info msg="Forcibly stopping sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\"" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.705 [WARNING][5897] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"892fe9de-db99-487d-bc78-cf3eb17d4d4b", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"99400576484d7ef04dd8b7d3b2841631d7b95d1ea23a39ba149451c00926e67e", Pod:"coredns-674b8bbfcf-rjwpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e35d12aa82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 
00:45:11.706 [INFO][5897] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.706 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" iface="eth0" netns="" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.706 [INFO][5897] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.706 [INFO][5897] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.726 [INFO][5904] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.726 [INFO][5904] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.726 [INFO][5904] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.735 [WARNING][5904] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.735 [INFO][5904] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" HandleID="k8s-pod-network.eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Workload="ci--4081.3.6--n--d3c3414975-k8s-coredns--674b8bbfcf--rjwpl-eth0" Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.737 [INFO][5904] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.741346 containerd[1721]: 2026-03-04 00:45:11.739 [INFO][5897] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978" Mar 4 00:45:11.742163 containerd[1721]: time="2026-03-04T00:45:11.741389043Z" level=info msg="TearDown network for sandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\" successfully" Mar 4 00:45:11.785134 containerd[1721]: time="2026-03-04T00:45:11.785067074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:45:11.785271 containerd[1721]: time="2026-03-04T00:45:11.785157594Z" level=info msg="RemovePodSandbox \"eba02a2c5eedce146d1ca9ea41fbcc82c3b84790e604c9fcaaf554a8fe686978\" returns successfully" Mar 4 00:45:11.785751 containerd[1721]: time="2026-03-04T00:45:11.785720354Z" level=info msg="StopPodSandbox for \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\"" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.821 [WARNING][5919] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"619b7aec-3809-4bd9-91c1-1f701fc98910", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9", Pod:"calico-apiserver-6c457fd865-bbvcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie71069e8008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.821 [INFO][5919] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.821 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" iface="eth0" netns="" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.821 [INFO][5919] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.821 [INFO][5919] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.839 [INFO][5927] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.839 [INFO][5927] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.840 [INFO][5927] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.849 [WARNING][5927] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.849 [INFO][5927] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.852 [INFO][5927] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.856894 containerd[1721]: 2026-03-04 00:45:11.854 [INFO][5919] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.858203 containerd[1721]: time="2026-03-04T00:45:11.858067925Z" level=info msg="TearDown network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\" successfully" Mar 4 00:45:11.858203 containerd[1721]: time="2026-03-04T00:45:11.858201765Z" level=info msg="StopPodSandbox for \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\" returns successfully" Mar 4 00:45:11.858815 containerd[1721]: time="2026-03-04T00:45:11.858789445Z" level=info msg="RemovePodSandbox for \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\"" Mar 4 00:45:11.858878 containerd[1721]: time="2026-03-04T00:45:11.858819685Z" level=info msg="Forcibly stopping sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\"" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.890 [WARNING][5941] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0", GenerateName:"calico-apiserver-6c457fd865-", Namespace:"calico-system", SelfLink:"", UID:"619b7aec-3809-4bd9-91c1-1f701fc98910", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c457fd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"4b54a0e00f9c841fcb9069b01d4fabcd13ac7e1ee5cc25a1d75cf86c9cafaba9", Pod:"calico-apiserver-6c457fd865-bbvcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie71069e8008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.891 [INFO][5941] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.891 [INFO][5941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" iface="eth0" netns="" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.891 [INFO][5941] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.891 [INFO][5941] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.911 [INFO][5948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.911 [INFO][5948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.911 [INFO][5948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.919 [WARNING][5948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.920 [INFO][5948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" HandleID="k8s-pod-network.2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--apiserver--6c457fd865--bbvcs-eth0" Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.921 [INFO][5948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:11.925211 containerd[1721]: 2026-03-04 00:45:11.923 [INFO][5941] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f" Mar 4 00:45:11.925692 containerd[1721]: time="2026-03-04T00:45:11.925236331Z" level=info msg="TearDown network for sandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\" successfully" Mar 4 00:45:11.932880 containerd[1721]: time="2026-03-04T00:45:11.932834417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:45:11.932997 containerd[1721]: time="2026-03-04T00:45:11.932919337Z" level=info msg="RemovePodSandbox \"2a849c1c4983ca478f5c9e7ec6602dcc1a401acd41a4cd885173eb28e9b5904f\" returns successfully" Mar 4 00:45:11.933691 containerd[1721]: time="2026-03-04T00:45:11.933423177Z" level=info msg="StopPodSandbox for \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\"" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.966 [WARNING][5962] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.967 [INFO][5962] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.967 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" iface="eth0" netns="" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.967 [INFO][5962] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.967 [INFO][5962] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.986 [INFO][5969] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.986 [INFO][5969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.986 [INFO][5969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.995 [WARNING][5969] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.995 [INFO][5969] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:11.997 [INFO][5969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:12.001518 containerd[1721]: 2026-03-04 00:45:12.000 [INFO][5962] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.002090 containerd[1721]: time="2026-03-04T00:45:12.001980185Z" level=info msg="TearDown network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\" successfully" Mar 4 00:45:12.002090 containerd[1721]: time="2026-03-04T00:45:12.002009345Z" level=info msg="StopPodSandbox for \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\" returns successfully" Mar 4 00:45:12.002804 containerd[1721]: time="2026-03-04T00:45:12.002534345Z" level=info msg="RemovePodSandbox for \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\"" Mar 4 00:45:12.002804 containerd[1721]: time="2026-03-04T00:45:12.002563425Z" level=info msg="Forcibly stopping sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\"" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.035 [WARNING][5983] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" WorkloadEndpoint="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.035 [INFO][5983] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.035 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" iface="eth0" netns="" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.035 [INFO][5983] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.035 [INFO][5983] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.056 [INFO][5990] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.057 [INFO][5990] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.057 [INFO][5990] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.065 [WARNING][5990] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.065 [INFO][5990] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" HandleID="k8s-pod-network.71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Workload="ci--4081.3.6--n--d3c3414975-k8s-whisker--bfbbd99b4--flhbl-eth0" Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.067 [INFO][5990] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:12.070864 containerd[1721]: 2026-03-04 00:45:12.069 [INFO][5983] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee" Mar 4 00:45:12.071806 containerd[1721]: time="2026-03-04T00:45:12.071300793Z" level=info msg="TearDown network for sandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\" successfully" Mar 4 00:45:12.080381 containerd[1721]: time="2026-03-04T00:45:12.080318679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:45:12.080628 containerd[1721]: time="2026-03-04T00:45:12.080391439Z" level=info msg="RemovePodSandbox \"71b12a086c4fd34bd5a74ba6635d03190e4cfe28cbf985a4a6647fb4fd15c0ee\" returns successfully" Mar 4 00:45:12.080964 containerd[1721]: time="2026-03-04T00:45:12.080942280Z" level=info msg="StopPodSandbox for \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\"" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.121 [WARNING][6004] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0", GenerateName:"calico-kube-controllers-54988d8f9b-", Namespace:"calico-system", SelfLink:"", UID:"1026cf72-bf90-4cde-98d0-93ad03abe950", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54988d8f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc", Pod:"calico-kube-controllers-54988d8f9b-q276v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali345d6e7df87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.121 [INFO][6004] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.121 [INFO][6004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" iface="eth0" netns="" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.121 [INFO][6004] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.121 [INFO][6004] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.147 [INFO][6012] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.147 [INFO][6012] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.147 [INFO][6012] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.156 [WARNING][6012] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.156 [INFO][6012] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.158 [INFO][6012] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:12.162665 containerd[1721]: 2026-03-04 00:45:12.160 [INFO][6004] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.163661 containerd[1721]: time="2026-03-04T00:45:12.162779497Z" level=info msg="TearDown network for sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\" successfully" Mar 4 00:45:12.163661 containerd[1721]: time="2026-03-04T00:45:12.162851097Z" level=info msg="StopPodSandbox for \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\" returns successfully" Mar 4 00:45:12.163661 containerd[1721]: time="2026-03-04T00:45:12.163502457Z" level=info msg="RemovePodSandbox for \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\"" Mar 4 00:45:12.163661 containerd[1721]: time="2026-03-04T00:45:12.163530697Z" level=info msg="Forcibly stopping sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\"" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.203 [WARNING][6027] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0", GenerateName:"calico-kube-controllers-54988d8f9b-", Namespace:"calico-system", SelfLink:"", UID:"1026cf72-bf90-4cde-98d0-93ad03abe950", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 0, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54988d8f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d3c3414975", ContainerID:"5c6e317aa749b5a87c6ce5fd2abf4a0eb7b37e2c744a0ec34cb8ec6d10ef6acc", Pod:"calico-kube-controllers-54988d8f9b-q276v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali345d6e7df87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.203 [INFO][6027] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.203 [INFO][6027] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" iface="eth0" netns="" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.203 [INFO][6027] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.203 [INFO][6027] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.224 [INFO][6034] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.224 [INFO][6034] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.224 [INFO][6034] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.233 [WARNING][6034] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.233 [INFO][6034] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" HandleID="k8s-pod-network.e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Workload="ci--4081.3.6--n--d3c3414975-k8s-calico--kube--controllers--54988d8f9b--q276v-eth0" Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.235 [INFO][6034] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 00:45:12.238720 containerd[1721]: 2026-03-04 00:45:12.237 [INFO][6027] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1" Mar 4 00:45:12.239196 containerd[1721]: time="2026-03-04T00:45:12.238730910Z" level=info msg="TearDown network for sandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\" successfully" Mar 4 00:45:12.269584 containerd[1721]: time="2026-03-04T00:45:12.269537891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 00:45:12.269979 containerd[1721]: time="2026-03-04T00:45:12.269952691Z" level=info msg="RemovePodSandbox \"e45b15a31e952e096577c96bd8f473a849197cefc3d5f38678acfb2ba01e2ef1\" returns successfully" Mar 4 00:45:12.411433 containerd[1721]: time="2026-03-04T00:45:12.411382990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:12.414216 containerd[1721]: time="2026-03-04T00:45:12.413920592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 4 00:45:12.417362 containerd[1721]: time="2026-03-04T00:45:12.417320874Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:12.422175 containerd[1721]: time="2026-03-04T00:45:12.422112797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:12.422999 containerd[1721]: time="2026-03-04T00:45:12.422838718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.691402378s" Mar 4 00:45:12.422999 containerd[1721]: time="2026-03-04T00:45:12.422874798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 4 00:45:12.424312 containerd[1721]: 
time="2026-03-04T00:45:12.424277599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 00:45:12.432878 containerd[1721]: time="2026-03-04T00:45:12.432838125Z" level=info msg="CreateContainer within sandbox \"159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 00:45:12.466613 containerd[1721]: time="2026-03-04T00:45:12.466569348Z" level=info msg="CreateContainer within sandbox \"159a3943d717ce8ba8ef0e6f43b5f649e3b3f05d94b77f6d5c38c2390abee91a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ba5bd8f853762aa6aa5e7b413917a3e0f353aecf946f36a1332a04e66fb5b090\"" Mar 4 00:45:12.467434 containerd[1721]: time="2026-03-04T00:45:12.467405669Z" level=info msg="StartContainer for \"ba5bd8f853762aa6aa5e7b413917a3e0f353aecf946f36a1332a04e66fb5b090\"" Mar 4 00:45:12.513280 systemd[1]: Started cri-containerd-ba5bd8f853762aa6aa5e7b413917a3e0f353aecf946f36a1332a04e66fb5b090.scope - libcontainer container ba5bd8f853762aa6aa5e7b413917a3e0f353aecf946f36a1332a04e66fb5b090. 
Mar 4 00:45:13.158002 kubelet[3174]: I0304 00:45:13.157594 3174 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 00:45:13.158002 kubelet[3174]: I0304 00:45:13.157639 3174 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 00:45:13.381686 containerd[1721]: time="2026-03-04T00:45:13.381598899Z" level=info msg="StartContainer for \"ba5bd8f853762aa6aa5e7b413917a3e0f353aecf946f36a1332a04e66fb5b090\" returns successfully" Mar 4 00:45:13.528894 kubelet[3174]: I0304 00:45:13.528386 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-mnq4v" podStartSLOduration=31.296352981 podStartE2EDuration="42.52837159s" podCreationTimestamp="2026-03-04 00:44:31 +0000 UTC" firstStartedPulling="2026-03-04 00:44:57.080979327 +0000 UTC m=+46.140151946" lastFinishedPulling="2026-03-04 00:45:08.312997936 +0000 UTC m=+57.372170555" observedRunningTime="2026-03-04 00:45:08.485117216 +0000 UTC m=+57.544289875" watchObservedRunningTime="2026-03-04 00:45:13.52837159 +0000 UTC m=+62.587544209" Mar 4 00:45:13.528894 kubelet[3174]: I0304 00:45:13.528641 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lgcgd" podStartSLOduration=23.877383432 podStartE2EDuration="40.52863547s" podCreationTimestamp="2026-03-04 00:44:33 +0000 UTC" firstStartedPulling="2026-03-04 00:44:55.772635161 +0000 UTC m=+44.831807780" lastFinishedPulling="2026-03-04 00:45:12.423887199 +0000 UTC m=+61.483059818" observedRunningTime="2026-03-04 00:45:13.527840149 +0000 UTC m=+62.587012768" watchObservedRunningTime="2026-03-04 00:45:13.52863547 +0000 UTC m=+62.587808089" Mar 4 00:45:15.004046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568807466.mount: Deactivated 
successfully. Mar 4 00:45:15.042943 containerd[1721]: time="2026-03-04T00:45:15.042175138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:15.044967 containerd[1721]: time="2026-03-04T00:45:15.044935779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 4 00:45:15.048499 containerd[1721]: time="2026-03-04T00:45:15.048450419Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:15.053539 containerd[1721]: time="2026-03-04T00:45:15.053484179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 00:45:15.055210 containerd[1721]: time="2026-03-04T00:45:15.054468059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.63015682s" Mar 4 00:45:15.055210 containerd[1721]: time="2026-03-04T00:45:15.054503259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 4 00:45:15.063563 containerd[1721]: time="2026-03-04T00:45:15.063354180Z" level=info msg="CreateContainer within sandbox \"32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 00:45:15.101743 
containerd[1721]: time="2026-03-04T00:45:15.101700743Z" level=info msg="CreateContainer within sandbox \"32eb9a9dbd99d0aa1ae57d2c6f8155915bbb51334b78353faba46dbb66aa8375\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"24c55ba02dd84b65b8f94c95a34305a610391dd2e2d2cfc81916870c9ce53b3f\"" Mar 4 00:45:15.104039 containerd[1721]: time="2026-03-04T00:45:15.102703143Z" level=info msg="StartContainer for \"24c55ba02dd84b65b8f94c95a34305a610391dd2e2d2cfc81916870c9ce53b3f\"" Mar 4 00:45:15.140934 systemd[1]: Started cri-containerd-24c55ba02dd84b65b8f94c95a34305a610391dd2e2d2cfc81916870c9ce53b3f.scope - libcontainer container 24c55ba02dd84b65b8f94c95a34305a610391dd2e2d2cfc81916870c9ce53b3f. Mar 4 00:45:15.179602 containerd[1721]: time="2026-03-04T00:45:15.179553028Z" level=info msg="StartContainer for \"24c55ba02dd84b65b8f94c95a34305a610391dd2e2d2cfc81916870c9ce53b3f\" returns successfully" Mar 4 00:45:15.540615 kubelet[3174]: I0304 00:45:15.540548 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58c5f4f849-n9str" podStartSLOduration=2.035028908 podStartE2EDuration="19.540310694s" podCreationTimestamp="2026-03-04 00:44:56 +0000 UTC" firstStartedPulling="2026-03-04 00:44:57.550115953 +0000 UTC m=+46.609288572" lastFinishedPulling="2026-03-04 00:45:15.055397739 +0000 UTC m=+64.114570358" observedRunningTime="2026-03-04 00:45:15.538575774 +0000 UTC m=+64.597748633" watchObservedRunningTime="2026-03-04 00:45:15.540310694 +0000 UTC m=+64.599483313" Mar 4 00:45:15.825193 systemd[1]: run-containerd-runc-k8s.io-121919c2b2f931367da3cb82718d47072eac2486f4f869d13812b817351191ff-runc.6LzzZd.mount: Deactivated successfully. 
Mar 4 00:45:46.726428 kubelet[3174]: I0304 00:45:46.726057 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 00:45:57.601092 systemd[1]: run-containerd-runc-k8s.io-acd1deeb77a6947a381dd1961795a0a8a6193c574acc1554bf5a075462b07471-runc.YfnoRg.mount: Deactivated successfully. Mar 4 00:46:04.076388 systemd[1]: Started sshd@7-10.200.20.21:22-10.200.16.10:32878.service - OpenSSH per-connection server daemon (10.200.16.10:32878). Mar 4 00:46:04.573139 sshd[6299]: Accepted publickey for core from 10.200.16.10 port 32878 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:04.574705 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:04.579256 systemd-logind[1698]: New session 10 of user core. Mar 4 00:46:04.583257 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 4 00:46:05.000998 sshd[6299]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:05.004748 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit. Mar 4 00:46:05.005530 systemd[1]: sshd@7-10.200.20.21:22-10.200.16.10:32878.service: Deactivated successfully. Mar 4 00:46:05.008705 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 00:46:05.009331 systemd-logind[1698]: Removed session 10. Mar 4 00:46:10.089915 systemd[1]: Started sshd@8-10.200.20.21:22-10.200.16.10:40044.service - OpenSSH per-connection server daemon (10.200.16.10:40044). Mar 4 00:46:10.590134 sshd[6343]: Accepted publickey for core from 10.200.16.10 port 40044 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:10.591021 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:10.595305 systemd-logind[1698]: New session 11 of user core. Mar 4 00:46:10.607304 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 4 00:46:11.008660 sshd[6343]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:11.012408 systemd[1]: sshd@8-10.200.20.21:22-10.200.16.10:40044.service: Deactivated successfully. Mar 4 00:46:11.015907 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 00:46:11.016908 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit. Mar 4 00:46:11.017851 systemd-logind[1698]: Removed session 11. Mar 4 00:46:16.097848 systemd[1]: Started sshd@9-10.200.20.21:22-10.200.16.10:40050.service - OpenSSH per-connection server daemon (10.200.16.10:40050). Mar 4 00:46:16.590140 sshd[6397]: Accepted publickey for core from 10.200.16.10 port 40050 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:16.591147 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:16.595064 systemd-logind[1698]: New session 12 of user core. Mar 4 00:46:16.602251 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 00:46:17.001691 sshd[6397]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:17.005955 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit. Mar 4 00:46:17.006677 systemd[1]: sshd@9-10.200.20.21:22-10.200.16.10:40050.service: Deactivated successfully. Mar 4 00:46:17.008892 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 00:46:17.010415 systemd-logind[1698]: Removed session 12. Mar 4 00:46:22.091482 systemd[1]: Started sshd@10-10.200.20.21:22-10.200.16.10:37748.service - OpenSSH per-connection server daemon (10.200.16.10:37748). Mar 4 00:46:22.601897 sshd[6450]: Accepted publickey for core from 10.200.16.10 port 37748 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:22.603851 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:22.608832 systemd-logind[1698]: New session 13 of user core. 
Mar 4 00:46:22.613266 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 4 00:46:23.015301 sshd[6450]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:23.018845 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit. Mar 4 00:46:23.019372 systemd[1]: sshd@10-10.200.20.21:22-10.200.16.10:37748.service: Deactivated successfully. Mar 4 00:46:23.022332 systemd[1]: session-13.scope: Deactivated successfully. Mar 4 00:46:23.024270 systemd-logind[1698]: Removed session 13. Mar 4 00:46:28.111427 systemd[1]: Started sshd@11-10.200.20.21:22-10.200.16.10:37758.service - OpenSSH per-connection server daemon (10.200.16.10:37758). Mar 4 00:46:28.600879 sshd[6495]: Accepted publickey for core from 10.200.16.10 port 37758 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:28.633340 sshd[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:28.637396 systemd-logind[1698]: New session 14 of user core. Mar 4 00:46:28.643263 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 4 00:46:29.019508 sshd[6495]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:29.025196 systemd[1]: sshd@11-10.200.20.21:22-10.200.16.10:37758.service: Deactivated successfully. Mar 4 00:46:29.027624 systemd[1]: session-14.scope: Deactivated successfully. Mar 4 00:46:29.029636 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit. Mar 4 00:46:29.031387 systemd-logind[1698]: Removed session 14. Mar 4 00:46:34.112393 systemd[1]: Started sshd@12-10.200.20.21:22-10.200.16.10:59778.service - OpenSSH per-connection server daemon (10.200.16.10:59778). 
Mar 4 00:46:34.609142 sshd[6531]: Accepted publickey for core from 10.200.16.10 port 59778 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:34.610145 sshd[6531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:34.613981 systemd-logind[1698]: New session 15 of user core. Mar 4 00:46:34.619259 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 4 00:46:35.026341 sshd[6531]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:35.029977 systemd[1]: sshd@12-10.200.20.21:22-10.200.16.10:59778.service: Deactivated successfully. Mar 4 00:46:35.032583 systemd[1]: session-15.scope: Deactivated successfully. Mar 4 00:46:35.033666 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit. Mar 4 00:46:35.034565 systemd-logind[1698]: Removed session 15. Mar 4 00:46:40.115043 systemd[1]: Started sshd@13-10.200.20.21:22-10.200.16.10:37828.service - OpenSSH per-connection server daemon (10.200.16.10:37828). Mar 4 00:46:40.605609 sshd[6564]: Accepted publickey for core from 10.200.16.10 port 37828 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:40.606524 sshd[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:40.610756 systemd-logind[1698]: New session 16 of user core. Mar 4 00:46:40.616252 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 4 00:46:41.026132 sshd[6564]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:41.030539 systemd[1]: sshd@13-10.200.20.21:22-10.200.16.10:37828.service: Deactivated successfully. Mar 4 00:46:41.033990 systemd[1]: session-16.scope: Deactivated successfully. Mar 4 00:46:41.034976 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit. Mar 4 00:46:41.036197 systemd-logind[1698]: Removed session 16. 
Mar 4 00:46:46.113018 systemd[1]: Started sshd@14-10.200.20.21:22-10.200.16.10:37834.service - OpenSSH per-connection server daemon (10.200.16.10:37834). Mar 4 00:46:46.605016 sshd[6617]: Accepted publickey for core from 10.200.16.10 port 37834 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:46.605861 sshd[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:46.610256 systemd-logind[1698]: New session 17 of user core. Mar 4 00:46:46.614256 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 4 00:46:47.018081 sshd[6617]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:47.023138 systemd[1]: sshd@14-10.200.20.21:22-10.200.16.10:37834.service: Deactivated successfully. Mar 4 00:46:47.025556 systemd[1]: session-17.scope: Deactivated successfully. Mar 4 00:46:47.026627 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit. Mar 4 00:46:47.028081 systemd-logind[1698]: Removed session 17. Mar 4 00:46:47.112314 systemd[1]: Started sshd@15-10.200.20.21:22-10.200.16.10:37836.service - OpenSSH per-connection server daemon (10.200.16.10:37836). Mar 4 00:46:47.600799 sshd[6631]: Accepted publickey for core from 10.200.16.10 port 37836 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:47.601354 sshd[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:47.605279 systemd-logind[1698]: New session 18 of user core. Mar 4 00:46:47.613256 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 4 00:46:48.075498 sshd[6631]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:48.080403 systemd[1]: sshd@15-10.200.20.21:22-10.200.16.10:37836.service: Deactivated successfully. Mar 4 00:46:48.084030 systemd[1]: session-18.scope: Deactivated successfully. Mar 4 00:46:48.085029 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit. 
Mar 4 00:46:48.087306 systemd-logind[1698]: Removed session 18. Mar 4 00:46:48.166403 systemd[1]: Started sshd@16-10.200.20.21:22-10.200.16.10:37852.service - OpenSSH per-connection server daemon (10.200.16.10:37852). Mar 4 00:46:48.653139 sshd[6642]: Accepted publickey for core from 10.200.16.10 port 37852 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:48.654516 sshd[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:48.658998 systemd-logind[1698]: New session 19 of user core. Mar 4 00:46:48.667297 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 4 00:46:49.065713 sshd[6642]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:49.069927 systemd[1]: sshd@16-10.200.20.21:22-10.200.16.10:37852.service: Deactivated successfully. Mar 4 00:46:49.071817 systemd[1]: session-19.scope: Deactivated successfully. Mar 4 00:46:49.072697 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit. Mar 4 00:46:49.073885 systemd-logind[1698]: Removed session 19. Mar 4 00:46:54.157457 systemd[1]: Started sshd@17-10.200.20.21:22-10.200.16.10:39208.service - OpenSSH per-connection server daemon (10.200.16.10:39208). Mar 4 00:46:54.645482 sshd[6657]: Accepted publickey for core from 10.200.16.10 port 39208 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA Mar 4 00:46:54.647389 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:46:54.651763 systemd-logind[1698]: New session 20 of user core. Mar 4 00:46:54.662273 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 4 00:46:55.059116 sshd[6657]: pam_unix(sshd:session): session closed for user core Mar 4 00:46:55.063443 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit. Mar 4 00:46:55.064462 systemd[1]: sshd@17-10.200.20.21:22-10.200.16.10:39208.service: Deactivated successfully. 
Mar 4 00:46:55.066840 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 00:46:55.068033 systemd-logind[1698]: Removed session 20.
Mar 4 00:47:00.154689 systemd[1]: Started sshd@18-10.200.20.21:22-10.200.16.10:39450.service - OpenSSH per-connection server daemon (10.200.16.10:39450).
Mar 4 00:47:00.641352 sshd[6691]: Accepted publickey for core from 10.200.16.10 port 39450 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:00.694395 sshd[6691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:00.698096 systemd-logind[1698]: New session 21 of user core.
Mar 4 00:47:00.703263 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 00:47:01.056344 sshd[6691]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:01.060754 systemd[1]: sshd@18-10.200.20.21:22-10.200.16.10:39450.service: Deactivated successfully.
Mar 4 00:47:01.063160 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 00:47:01.065821 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit.
Mar 4 00:47:01.066730 systemd-logind[1698]: Removed session 21.
Mar 4 00:47:01.137639 systemd[1]: Started sshd@19-10.200.20.21:22-10.200.16.10:39454.service - OpenSSH per-connection server daemon (10.200.16.10:39454).
Mar 4 00:47:01.594686 sshd[6704]: Accepted publickey for core from 10.200.16.10 port 39454 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:01.596145 sshd[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:01.600306 systemd-logind[1698]: New session 22 of user core.
Mar 4 00:47:01.605238 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 00:47:02.126963 sshd[6704]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:02.130834 systemd[1]: sshd@19-10.200.20.21:22-10.200.16.10:39454.service: Deactivated successfully.
Mar 4 00:47:02.132886 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 00:47:02.134830 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit.
Mar 4 00:47:02.135912 systemd-logind[1698]: Removed session 22.
Mar 4 00:47:02.218385 systemd[1]: Started sshd@20-10.200.20.21:22-10.200.16.10:39458.service - OpenSSH per-connection server daemon (10.200.16.10:39458).
Mar 4 00:47:02.707519 sshd[6714]: Accepted publickey for core from 10.200.16.10 port 39458 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:02.708879 sshd[6714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:02.713407 systemd-logind[1698]: New session 23 of user core.
Mar 4 00:47:02.719329 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 00:47:03.587462 sshd[6714]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:03.591567 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit.
Mar 4 00:47:03.591762 systemd[1]: sshd@20-10.200.20.21:22-10.200.16.10:39458.service: Deactivated successfully.
Mar 4 00:47:03.595849 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 00:47:03.597598 systemd-logind[1698]: Removed session 23.
Mar 4 00:47:03.672822 systemd[1]: Started sshd@21-10.200.20.21:22-10.200.16.10:39474.service - OpenSSH per-connection server daemon (10.200.16.10:39474).
Mar 4 00:47:04.168675 sshd[6747]: Accepted publickey for core from 10.200.16.10 port 39474 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:04.169875 sshd[6747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:04.174247 systemd-logind[1698]: New session 24 of user core.
Mar 4 00:47:04.178254 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 00:47:04.725089 sshd[6747]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:04.730155 systemd[1]: sshd@21-10.200.20.21:22-10.200.16.10:39474.service: Deactivated successfully.
Mar 4 00:47:04.731996 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 00:47:04.732728 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit.
Mar 4 00:47:04.733727 systemd-logind[1698]: Removed session 24.
Mar 4 00:47:04.810808 systemd[1]: Started sshd@22-10.200.20.21:22-10.200.16.10:39480.service - OpenSSH per-connection server daemon (10.200.16.10:39480).
Mar 4 00:47:05.301310 sshd[6758]: Accepted publickey for core from 10.200.16.10 port 39480 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:05.302782 sshd[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:05.306809 systemd-logind[1698]: New session 25 of user core.
Mar 4 00:47:05.313295 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 4 00:47:05.713319 sshd[6758]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:05.717058 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit.
Mar 4 00:47:05.717294 systemd[1]: sshd@22-10.200.20.21:22-10.200.16.10:39480.service: Deactivated successfully.
Mar 4 00:47:05.718995 systemd[1]: session-25.scope: Deactivated successfully.
Mar 4 00:47:05.723168 systemd-logind[1698]: Removed session 25.
Mar 4 00:47:10.801491 systemd[1]: Started sshd@23-10.200.20.21:22-10.200.16.10:38444.service - OpenSSH per-connection server daemon (10.200.16.10:38444).
Mar 4 00:47:11.296439 sshd[6816]: Accepted publickey for core from 10.200.16.10 port 38444 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:11.315541 sshd[6816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:11.319714 systemd-logind[1698]: New session 26 of user core.
Mar 4 00:47:11.327283 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 4 00:47:11.706942 sshd[6816]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:11.710945 systemd[1]: sshd@23-10.200.20.21:22-10.200.16.10:38444.service: Deactivated successfully.
Mar 4 00:47:11.713844 systemd[1]: session-26.scope: Deactivated successfully.
Mar 4 00:47:11.714776 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit.
Mar 4 00:47:11.716042 systemd-logind[1698]: Removed session 26.
Mar 4 00:47:16.801760 systemd[1]: Started sshd@24-10.200.20.21:22-10.200.16.10:38458.service - OpenSSH per-connection server daemon (10.200.16.10:38458).
Mar 4 00:47:17.286898 sshd[6851]: Accepted publickey for core from 10.200.16.10 port 38458 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:17.288637 sshd[6851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:17.294122 systemd-logind[1698]: New session 27 of user core.
Mar 4 00:47:17.301355 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 4 00:47:17.698142 sshd[6851]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:17.702166 systemd[1]: sshd@24-10.200.20.21:22-10.200.16.10:38458.service: Deactivated successfully.
Mar 4 00:47:17.704708 systemd[1]: session-27.scope: Deactivated successfully.
Mar 4 00:47:17.705998 systemd-logind[1698]: Session 27 logged out. Waiting for processes to exit.
Mar 4 00:47:17.706907 systemd-logind[1698]: Removed session 27.
Mar 4 00:47:22.796681 systemd[1]: Started sshd@25-10.200.20.21:22-10.200.16.10:43618.service - OpenSSH per-connection server daemon (10.200.16.10:43618).
Mar 4 00:47:23.284137 sshd[6886]: Accepted publickey for core from 10.200.16.10 port 43618 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:23.285076 sshd[6886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:23.289598 systemd-logind[1698]: New session 28 of user core.
Mar 4 00:47:23.297260 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 4 00:47:23.694651 sshd[6886]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:23.697947 systemd-logind[1698]: Session 28 logged out. Waiting for processes to exit.
Mar 4 00:47:23.698217 systemd[1]: sshd@25-10.200.20.21:22-10.200.16.10:43618.service: Deactivated successfully.
Mar 4 00:47:23.700023 systemd[1]: session-28.scope: Deactivated successfully.
Mar 4 00:47:23.701922 systemd-logind[1698]: Removed session 28.
Mar 4 00:47:28.786406 systemd[1]: Started sshd@26-10.200.20.21:22-10.200.16.10:43626.service - OpenSSH per-connection server daemon (10.200.16.10:43626).
Mar 4 00:47:29.273979 sshd[6920]: Accepted publickey for core from 10.200.16.10 port 43626 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:29.275407 sshd[6920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:29.281602 systemd-logind[1698]: New session 29 of user core.
Mar 4 00:47:29.286280 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 4 00:47:29.677536 sshd[6920]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:29.681540 systemd[1]: sshd@26-10.200.20.21:22-10.200.16.10:43626.service: Deactivated successfully.
Mar 4 00:47:29.683559 systemd[1]: session-29.scope: Deactivated successfully.
Mar 4 00:47:29.685919 systemd-logind[1698]: Session 29 logged out. Waiting for processes to exit.
Mar 4 00:47:29.687261 systemd-logind[1698]: Removed session 29.
Mar 4 00:47:34.765741 systemd[1]: Started sshd@27-10.200.20.21:22-10.200.16.10:58208.service - OpenSSH per-connection server daemon (10.200.16.10:58208).
Mar 4 00:47:35.262136 sshd[6933]: Accepted publickey for core from 10.200.16.10 port 58208 ssh2: RSA SHA256:m77LwF62I0XCESiszQRGie5jYIfHleFyYd3Z4r8PTJA
Mar 4 00:47:35.263288 sshd[6933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 00:47:35.267481 systemd-logind[1698]: New session 30 of user core.
Mar 4 00:47:35.276477 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 4 00:47:35.677210 sshd[6933]: pam_unix(sshd:session): session closed for user core
Mar 4 00:47:35.680674 systemd[1]: sshd@27-10.200.20.21:22-10.200.16.10:58208.service: Deactivated successfully.
Mar 4 00:47:35.682500 systemd[1]: session-30.scope: Deactivated successfully.
Mar 4 00:47:35.683334 systemd-logind[1698]: Session 30 logged out. Waiting for processes to exit.
Mar 4 00:47:35.684352 systemd-logind[1698]: Removed session 30.